Test Report: KVM_Linux_crio 17991

2f8d23744a0abe64c36801766ae7232575880e73:2024-03-16:33591

Failed tests (31/319)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 158.4
41 TestAddons/parallel/MetricsServer 8.5
53 TestAddons/StoppedEnableDisable 154.37
135 TestFunctional/parallel/ImageCommands/ImageListShort 2.3
172 TestMultiControlPlane/serial/StopSecondaryNode 142.15
174 TestMultiControlPlane/serial/RestartSecondaryNode 50.88
176 TestMultiControlPlane/serial/RestartClusterKeepsNodes 381.6
179 TestMultiControlPlane/serial/StopCluster 141.96
239 TestMultiNode/serial/RestartKeepsNodes 309.31
241 TestMultiNode/serial/StopMultiNode 141.59
248 TestPreload 280.01
256 TestKubernetesUpgrade 384.17
288 TestPause/serial/SecondStartNoReconfiguration 69.73
293 TestStartStop/group/old-k8s-version/serial/FirstStart 284.67
303 TestStartStop/group/no-preload/serial/Stop 141.95
305 TestStartStop/group/embed-certs/serial/Stop 142.6
308 TestStartStop/group/default-k8s-diff-port/serial/Stop 141.73
309 TestStartStop/group/old-k8s-version/serial/DeployApp 0.52
310 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 116.31
311 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
312 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
319 TestStartStop/group/old-k8s-version/serial/SecondStart 744.55
320 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.22
321 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.31
322 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.39
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.43
324 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 397.76
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 491.48
326 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 245.4
327 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 109.32
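
The first failure in the table, TestAddons/parallel/Ingress, is broken out below. For reference, the probe that times out in that test can be repeated by hand against a live profile; the sketch below simply reuses the commands recorded in the log that follows (the profile name addons-097314 and binary path out/minikube-linux-amd64 are specific to this run):

# Sketch: repeating the probe that timed out in TestAddons/parallel/Ingress.
# Commands are taken from the test log below; curl exit code 28 (surfaced as
# "ssh: Process exited with status 28") indicates an operation timeout.
kubectl --context addons-097314 wait --for=condition=ready \
  --namespace=ingress-nginx pod \
  --selector=app.kubernetes.io/component=controller --timeout=90s
out/minikube-linux-amd64 -p addons-097314 ssh \
  "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
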
TestAddons/parallel/Ingress (158.4s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-097314 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-097314 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-097314 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [a22cc079-e09c-4d9a-b112-aadd108e8149] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [a22cc079-e09c-4d9a-b112-aadd108e8149] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 15.007695133s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-097314 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.58675549s)

** stderr **
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-097314 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.35
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-097314 addons disable ingress --alsologtostderr -v=1: (7.856048487s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-097314 -n addons-097314
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-097314 logs -n 25: (1.330090719s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-546206                                                                     | download-only-546206 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-255255                                                                     | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-465986                                                                     | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-546206                                                                     | download-only-546206 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-349079 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | binary-mirror-349079                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37207                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-349079                                                                     | binary-mirror-349079 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | addons-097314                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | addons-097314                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-097314 --wait=true                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-097314 ssh cat                                                                       | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | /opt/local-path-provisioner/pvc-c163d35d-fa3b-40ab-b865-3fb0f205250a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | -p addons-097314                                                                            |                      |         |         |                     |                     |
	| ip      | addons-097314 ip                                                                            | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-097314 addons                                                                        | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC |                     |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | addons-097314                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | -p addons-097314                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | addons-097314                                                                               |                      |         |         |                     |                     |
	| ssh     | addons-097314 ssh curl -s                                                                   | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-097314 addons                                                                        | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 23:00 UTC | 15 Mar 24 23:00 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-097314 addons                                                                        | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 23:00 UTC | 15 Mar 24 23:00 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-097314 ip                                                                            | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 23:02 UTC | 15 Mar 24 23:02 UTC |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 23:02 UTC | 15 Mar 24 23:02 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 23:02 UTC | 15 Mar 24 23:02 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 22:56:35
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 22:56:35.846979   83607 out.go:291] Setting OutFile to fd 1 ...
	I0315 22:56:35.847131   83607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:35.847142   83607 out.go:304] Setting ErrFile to fd 2...
	I0315 22:56:35.847146   83607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:35.847378   83607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 22:56:35.848119   83607 out.go:298] Setting JSON to false
	I0315 22:56:35.848997   83607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5946,"bootTime":1710537450,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 22:56:35.849063   83607 start.go:139] virtualization: kvm guest
	I0315 22:56:35.851568   83607 out.go:177] * [addons-097314] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 22:56:35.853041   83607 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 22:56:35.853179   83607 notify.go:220] Checking for updates...
	I0315 22:56:35.854546   83607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 22:56:35.856054   83607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:56:35.857407   83607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:35.858762   83607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 22:56:35.860066   83607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 22:56:35.861529   83607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 22:56:35.893553   83607 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 22:56:35.895003   83607 start.go:297] selected driver: kvm2
	I0315 22:56:35.895023   83607 start.go:901] validating driver "kvm2" against <nil>
	I0315 22:56:35.895034   83607 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 22:56:35.895747   83607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:35.895811   83607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 22:56:35.910357   83607 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 22:56:35.910406   83607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 22:56:35.910625   83607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 22:56:35.910683   83607 cni.go:84] Creating CNI manager for ""
	I0315 22:56:35.910695   83607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:56:35.910702   83607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 22:56:35.910774   83607 start.go:340] cluster config:
	{Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 22:56:35.910902   83607 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:35.912648   83607 out.go:177] * Starting "addons-097314" primary control-plane node in "addons-097314" cluster
	I0315 22:56:35.913856   83607 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 22:56:35.913886   83607 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 22:56:35.913893   83607 cache.go:56] Caching tarball of preloaded images
	I0315 22:56:35.913961   83607 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 22:56:35.913971   83607 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 22:56:35.914255   83607 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/config.json ...
	I0315 22:56:35.914275   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/config.json: {Name:mk9a389d40bfd20da607554ee69b85887d211b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:56:35.914406   83607 start.go:360] acquireMachinesLock for addons-097314: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 22:56:35.914446   83607 start.go:364] duration metric: took 27.181µs to acquireMachinesLock for "addons-097314"
	I0315 22:56:35.914463   83607 start.go:93] Provisioning new machine with config: &{Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 22:56:35.914537   83607 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 22:56:35.916140   83607 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0315 22:56:35.916256   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:56:35.916290   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:56:35.930193   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0315 22:56:35.930625   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:56:35.931170   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:56:35.931196   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:56:35.931980   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:56:35.932969   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:56:35.933163   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:56:35.933309   83607 start.go:159] libmachine.API.Create for "addons-097314" (driver="kvm2")
	I0315 22:56:35.933336   83607 client.go:168] LocalClient.Create starting
	I0315 22:56:35.933371   83607 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 22:56:36.055123   83607 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 22:56:36.219582   83607 main.go:141] libmachine: Running pre-create checks...
	I0315 22:56:36.219608   83607 main.go:141] libmachine: (addons-097314) Calling .PreCreateCheck
	I0315 22:56:36.220161   83607 main.go:141] libmachine: (addons-097314) Calling .GetConfigRaw
	I0315 22:56:36.220629   83607 main.go:141] libmachine: Creating machine...
	I0315 22:56:36.220646   83607 main.go:141] libmachine: (addons-097314) Calling .Create
	I0315 22:56:36.220811   83607 main.go:141] libmachine: (addons-097314) Creating KVM machine...
	I0315 22:56:36.222018   83607 main.go:141] libmachine: (addons-097314) DBG | found existing default KVM network
	I0315 22:56:36.222702   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.222573   83629 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0315 22:56:36.222732   83607 main.go:141] libmachine: (addons-097314) DBG | created network xml: 
	I0315 22:56:36.222743   83607 main.go:141] libmachine: (addons-097314) DBG | <network>
	I0315 22:56:36.222751   83607 main.go:141] libmachine: (addons-097314) DBG |   <name>mk-addons-097314</name>
	I0315 22:56:36.222760   83607 main.go:141] libmachine: (addons-097314) DBG |   <dns enable='no'/>
	I0315 22:56:36.222769   83607 main.go:141] libmachine: (addons-097314) DBG |   
	I0315 22:56:36.222779   83607 main.go:141] libmachine: (addons-097314) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 22:56:36.222790   83607 main.go:141] libmachine: (addons-097314) DBG |     <dhcp>
	I0315 22:56:36.222797   83607 main.go:141] libmachine: (addons-097314) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 22:56:36.222802   83607 main.go:141] libmachine: (addons-097314) DBG |     </dhcp>
	I0315 22:56:36.222809   83607 main.go:141] libmachine: (addons-097314) DBG |   </ip>
	I0315 22:56:36.222814   83607 main.go:141] libmachine: (addons-097314) DBG |   
	I0315 22:56:36.222821   83607 main.go:141] libmachine: (addons-097314) DBG | </network>
	I0315 22:56:36.222826   83607 main.go:141] libmachine: (addons-097314) DBG | 
	I0315 22:56:36.228234   83607 main.go:141] libmachine: (addons-097314) DBG | trying to create private KVM network mk-addons-097314 192.168.39.0/24...
	I0315 22:56:36.292377   83607 main.go:141] libmachine: (addons-097314) DBG | private KVM network mk-addons-097314 192.168.39.0/24 created
	I0315 22:56:36.292409   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.292330   83629 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:36.292435   83607 main.go:141] libmachine: (addons-097314) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314 ...
	I0315 22:56:36.292455   83607 main.go:141] libmachine: (addons-097314) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 22:56:36.292481   83607 main.go:141] libmachine: (addons-097314) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 22:56:36.522653   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.522538   83629 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa...
	I0315 22:56:36.716752   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.716536   83629 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/addons-097314.rawdisk...
	I0315 22:56:36.716808   83607 main.go:141] libmachine: (addons-097314) DBG | Writing magic tar header
	I0315 22:56:36.716829   83607 main.go:141] libmachine: (addons-097314) DBG | Writing SSH key tar header
	I0315 22:56:36.716842   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.716703   83629 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314 ...
	I0315 22:56:36.716858   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314 (perms=drwx------)
	I0315 22:56:36.716910   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 22:56:36.716937   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314
	I0315 22:56:36.716947   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 22:56:36.716961   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 22:56:36.716970   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 22:56:36.716986   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 22:56:36.716995   83607 main.go:141] libmachine: (addons-097314) Creating domain...
	I0315 22:56:36.717030   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 22:56:36.717066   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:36.717080   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 22:56:36.717088   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 22:56:36.717102   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins
	I0315 22:56:36.717112   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home
	I0315 22:56:36.717127   83607 main.go:141] libmachine: (addons-097314) DBG | Skipping /home - not owner
	I0315 22:56:36.718117   83607 main.go:141] libmachine: (addons-097314) define libvirt domain using xml: 
	I0315 22:56:36.718146   83607 main.go:141] libmachine: (addons-097314) <domain type='kvm'>
	I0315 22:56:36.718153   83607 main.go:141] libmachine: (addons-097314)   <name>addons-097314</name>
	I0315 22:56:36.718158   83607 main.go:141] libmachine: (addons-097314)   <memory unit='MiB'>4000</memory>
	I0315 22:56:36.718164   83607 main.go:141] libmachine: (addons-097314)   <vcpu>2</vcpu>
	I0315 22:56:36.718168   83607 main.go:141] libmachine: (addons-097314)   <features>
	I0315 22:56:36.718172   83607 main.go:141] libmachine: (addons-097314)     <acpi/>
	I0315 22:56:36.718176   83607 main.go:141] libmachine: (addons-097314)     <apic/>
	I0315 22:56:36.718181   83607 main.go:141] libmachine: (addons-097314)     <pae/>
	I0315 22:56:36.718187   83607 main.go:141] libmachine: (addons-097314)     
	I0315 22:56:36.718193   83607 main.go:141] libmachine: (addons-097314)   </features>
	I0315 22:56:36.718203   83607 main.go:141] libmachine: (addons-097314)   <cpu mode='host-passthrough'>
	I0315 22:56:36.718207   83607 main.go:141] libmachine: (addons-097314)   
	I0315 22:56:36.718214   83607 main.go:141] libmachine: (addons-097314)   </cpu>
	I0315 22:56:36.718221   83607 main.go:141] libmachine: (addons-097314)   <os>
	I0315 22:56:36.718229   83607 main.go:141] libmachine: (addons-097314)     <type>hvm</type>
	I0315 22:56:36.718237   83607 main.go:141] libmachine: (addons-097314)     <boot dev='cdrom'/>
	I0315 22:56:36.718242   83607 main.go:141] libmachine: (addons-097314)     <boot dev='hd'/>
	I0315 22:56:36.718250   83607 main.go:141] libmachine: (addons-097314)     <bootmenu enable='no'/>
	I0315 22:56:36.718254   83607 main.go:141] libmachine: (addons-097314)   </os>
	I0315 22:56:36.718259   83607 main.go:141] libmachine: (addons-097314)   <devices>
	I0315 22:56:36.718265   83607 main.go:141] libmachine: (addons-097314)     <disk type='file' device='cdrom'>
	I0315 22:56:36.718280   83607 main.go:141] libmachine: (addons-097314)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/boot2docker.iso'/>
	I0315 22:56:36.718289   83607 main.go:141] libmachine: (addons-097314)       <target dev='hdc' bus='scsi'/>
	I0315 22:56:36.718297   83607 main.go:141] libmachine: (addons-097314)       <readonly/>
	I0315 22:56:36.718301   83607 main.go:141] libmachine: (addons-097314)     </disk>
	I0315 22:56:36.718309   83607 main.go:141] libmachine: (addons-097314)     <disk type='file' device='disk'>
	I0315 22:56:36.718320   83607 main.go:141] libmachine: (addons-097314)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 22:56:36.718331   83607 main.go:141] libmachine: (addons-097314)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/addons-097314.rawdisk'/>
	I0315 22:56:36.718338   83607 main.go:141] libmachine: (addons-097314)       <target dev='hda' bus='virtio'/>
	I0315 22:56:36.718343   83607 main.go:141] libmachine: (addons-097314)     </disk>
	I0315 22:56:36.718350   83607 main.go:141] libmachine: (addons-097314)     <interface type='network'>
	I0315 22:56:36.718356   83607 main.go:141] libmachine: (addons-097314)       <source network='mk-addons-097314'/>
	I0315 22:56:36.718363   83607 main.go:141] libmachine: (addons-097314)       <model type='virtio'/>
	I0315 22:56:36.718369   83607 main.go:141] libmachine: (addons-097314)     </interface>
	I0315 22:56:36.718376   83607 main.go:141] libmachine: (addons-097314)     <interface type='network'>
	I0315 22:56:36.718381   83607 main.go:141] libmachine: (addons-097314)       <source network='default'/>
	I0315 22:56:36.718388   83607 main.go:141] libmachine: (addons-097314)       <model type='virtio'/>
	I0315 22:56:36.718393   83607 main.go:141] libmachine: (addons-097314)     </interface>
	I0315 22:56:36.718401   83607 main.go:141] libmachine: (addons-097314)     <serial type='pty'>
	I0315 22:56:36.718407   83607 main.go:141] libmachine: (addons-097314)       <target port='0'/>
	I0315 22:56:36.718414   83607 main.go:141] libmachine: (addons-097314)     </serial>
	I0315 22:56:36.718419   83607 main.go:141] libmachine: (addons-097314)     <console type='pty'>
	I0315 22:56:36.718428   83607 main.go:141] libmachine: (addons-097314)       <target type='serial' port='0'/>
	I0315 22:56:36.718435   83607 main.go:141] libmachine: (addons-097314)     </console>
	I0315 22:56:36.718440   83607 main.go:141] libmachine: (addons-097314)     <rng model='virtio'>
	I0315 22:56:36.718448   83607 main.go:141] libmachine: (addons-097314)       <backend model='random'>/dev/random</backend>
	I0315 22:56:36.718454   83607 main.go:141] libmachine: (addons-097314)     </rng>
	I0315 22:56:36.718459   83607 main.go:141] libmachine: (addons-097314)     
	I0315 22:56:36.718471   83607 main.go:141] libmachine: (addons-097314)     
	I0315 22:56:36.718482   83607 main.go:141] libmachine: (addons-097314)   </devices>
	I0315 22:56:36.718495   83607 main.go:141] libmachine: (addons-097314) </domain>
	I0315 22:56:36.718515   83607 main.go:141] libmachine: (addons-097314) 
	I0315 22:56:36.723108   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:4c:e1:0d in network default
	I0315 22:56:36.723640   83607 main.go:141] libmachine: (addons-097314) Ensuring networks are active...
	I0315 22:56:36.723660   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:36.724253   83607 main.go:141] libmachine: (addons-097314) Ensuring network default is active
	I0315 22:56:36.724487   83607 main.go:141] libmachine: (addons-097314) Ensuring network mk-addons-097314 is active
	I0315 22:56:36.724912   83607 main.go:141] libmachine: (addons-097314) Getting domain xml...
	I0315 22:56:36.725544   83607 main.go:141] libmachine: (addons-097314) Creating domain...
	I0315 22:56:37.902191   83607 main.go:141] libmachine: (addons-097314) Waiting to get IP...
	I0315 22:56:37.903004   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:37.903387   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:37.903430   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:37.903373   83629 retry.go:31] will retry after 235.474185ms: waiting for machine to come up
	I0315 22:56:38.140840   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:38.141345   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:38.141374   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:38.141307   83629 retry.go:31] will retry after 264.242261ms: waiting for machine to come up
	I0315 22:56:38.406766   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:38.407224   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:38.407251   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:38.407168   83629 retry.go:31] will retry after 360.617395ms: waiting for machine to come up
	I0315 22:56:38.769711   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:38.770095   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:38.770127   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:38.770047   83629 retry.go:31] will retry after 390.899063ms: waiting for machine to come up
	I0315 22:56:39.162804   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:39.163234   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:39.163266   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:39.163186   83629 retry.go:31] will retry after 668.450716ms: waiting for machine to come up
	I0315 22:56:39.833588   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:39.833981   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:39.834018   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:39.833918   83629 retry.go:31] will retry after 923.27146ms: waiting for machine to come up
	I0315 22:56:40.758954   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:40.759298   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:40.759348   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:40.759247   83629 retry.go:31] will retry after 1.180578271s: waiting for machine to come up
	I0315 22:56:41.941457   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:41.942001   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:41.942029   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:41.941956   83629 retry.go:31] will retry after 1.155606203s: waiting for machine to come up
	I0315 22:56:43.099358   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:43.099823   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:43.099856   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:43.099775   83629 retry.go:31] will retry after 1.855181258s: waiting for machine to come up
	I0315 22:56:44.956293   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:44.956662   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:44.956691   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:44.956619   83629 retry.go:31] will retry after 2.062737263s: waiting for machine to come up
	I0315 22:56:47.020698   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:47.021211   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:47.021243   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:47.021158   83629 retry.go:31] will retry after 1.849288333s: waiting for machine to come up
	I0315 22:56:48.873145   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:48.873573   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:48.873606   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:48.873532   83629 retry.go:31] will retry after 2.428758066s: waiting for machine to come up
	I0315 22:56:51.303807   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:51.304223   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:51.304250   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:51.304161   83629 retry.go:31] will retry after 3.707319346s: waiting for machine to come up
	I0315 22:56:55.012756   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:55.013238   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:55.013261   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:55.013188   83629 retry.go:31] will retry after 5.268140743s: waiting for machine to come up
	I0315 22:57:00.285845   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.286302   83607 main.go:141] libmachine: (addons-097314) Found IP for machine: 192.168.39.35
	I0315 22:57:00.286331   83607 main.go:141] libmachine: (addons-097314) Reserving static IP address...
	I0315 22:57:00.286344   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has current primary IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.286767   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find host DHCP lease matching {name: "addons-097314", mac: "52:54:00:63:6b:cb", ip: "192.168.39.35"} in network mk-addons-097314
	I0315 22:57:00.359137   83607 main.go:141] libmachine: (addons-097314) Reserved static IP address: 192.168.39.35
	I0315 22:57:00.359174   83607 main.go:141] libmachine: (addons-097314) DBG | Getting to WaitForSSH function...
	I0315 22:57:00.359183   83607 main.go:141] libmachine: (addons-097314) Waiting for SSH to be available...
	I0315 22:57:00.361688   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.362132   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.362167   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.362256   83607 main.go:141] libmachine: (addons-097314) DBG | Using SSH client type: external
	I0315 22:57:00.362286   83607 main.go:141] libmachine: (addons-097314) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa (-rw-------)
	I0315 22:57:00.362308   83607 main.go:141] libmachine: (addons-097314) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 22:57:00.362321   83607 main.go:141] libmachine: (addons-097314) DBG | About to run SSH command:
	I0315 22:57:00.362330   83607 main.go:141] libmachine: (addons-097314) DBG | exit 0
	I0315 22:57:00.483305   83607 main.go:141] libmachine: (addons-097314) DBG | SSH cmd err, output: <nil>: 
	I0315 22:57:00.483669   83607 main.go:141] libmachine: (addons-097314) KVM machine creation complete!
	I0315 22:57:00.483956   83607 main.go:141] libmachine: (addons-097314) Calling .GetConfigRaw
	I0315 22:57:00.484531   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:00.484758   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:00.484936   83607 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 22:57:00.484955   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:00.486174   83607 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 22:57:00.486187   83607 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 22:57:00.486193   83607 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 22:57:00.486199   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.488861   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.489203   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.489232   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.489373   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.489609   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.489803   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.489974   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.490127   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.490387   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.490400   83607 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 22:57:00.590901   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
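
	Both "exit 0" probes above are just liveness checks: libmachine retries a trivial command until the guest's SSH daemon answers. A rough equivalent from a shell, reusing the options and key path the external client logged earlier (host and key are the ones from this run):

	    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	        -o ConnectTimeout=10 -o IdentitiesOnly=yes \
	        -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa \
	        docker@192.168.39.35 'exit 0' && echo "guest SSH is ready"
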
	I0315 22:57:00.590930   83607 main.go:141] libmachine: Detecting the provisioner...
	I0315 22:57:00.590939   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.593885   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.594196   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.594228   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.594337   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.594563   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.594704   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.594840   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.595043   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.595212   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.595226   83607 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 22:57:00.696110   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 22:57:00.696278   83607 main.go:141] libmachine: found compatible host: buildroot
	I0315 22:57:00.696297   83607 main.go:141] libmachine: Provisioning with buildroot...
	I0315 22:57:00.696309   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:57:00.696593   83607 buildroot.go:166] provisioning hostname "addons-097314"
	I0315 22:57:00.696615   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:57:00.696799   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.699431   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.699756   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.699786   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.699883   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.700064   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.700209   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.700331   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.700467   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.700637   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.700649   83607 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-097314 && echo "addons-097314" | sudo tee /etc/hostname
	I0315 22:57:00.813703   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-097314
	
	I0315 22:57:00.813737   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.816631   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.817125   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.817161   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.817316   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.817511   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.817664   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.817775   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.817917   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.818195   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.818225   83607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-097314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-097314/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-097314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 22:57:00.928753   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
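
	The provisioning step above sets the hostname and then patches /etc/hosts so the new name keeps resolving locally. The same idempotent update as a standalone sketch (NAME is the machine name from this run):

	    NAME=addons-097314
	    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
	    if ! grep -q "[[:space:]]$NAME\$" /etc/hosts; then
	      if grep -q '^127.0.1.1[[:space:]]' /etc/hosts; then
	        sudo sed -i "s/^127.0.1.1[[:space:]].*/127.0.1.1 $NAME/" /etc/hosts   # replace the stock entry
	      else
	        echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts                       # or append a fresh one
	      fi
	    fi
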
	I0315 22:57:00.928784   83607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 22:57:00.928841   83607 buildroot.go:174] setting up certificates
	I0315 22:57:00.928860   83607 provision.go:84] configureAuth start
	I0315 22:57:00.928875   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:57:00.929153   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:00.932113   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.932481   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.932515   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.932675   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.934999   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.935332   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.935362   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.935470   83607 provision.go:143] copyHostCerts
	I0315 22:57:00.935546   83607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 22:57:00.935694   83607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 22:57:00.935764   83607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 22:57:00.935812   83607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.addons-097314 san=[127.0.0.1 192.168.39.35 addons-097314 localhost minikube]
	I0315 22:57:01.227741   83607 provision.go:177] copyRemoteCerts
	I0315 22:57:01.227827   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 22:57:01.227864   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.230750   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.231043   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.231075   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.231233   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.231461   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.231652   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.231811   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.310334   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 22:57:01.337752   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 22:57:01.362073   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 22:57:01.386359   83607 provision.go:87] duration metric: took 457.484131ms to configureAuth
	I0315 22:57:01.386392   83607 buildroot.go:189] setting minikube options for container-runtime
	I0315 22:57:01.386594   83607 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:57:01.386672   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.389587   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.389974   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.390005   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.390235   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.390400   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.390546   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.390666   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.390862   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:01.391021   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:01.391035   83607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 22:57:01.659917   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
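
	The container-runtime step above amounts to writing one sysconfig line that marks the service CIDR as an insecure registry, then bouncing cri-o. A minimal sketch of the same three commands, assuming root on the guest:

	    sudo mkdir -p /etc/sysconfig
	    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" |
	      sudo tee /etc/sysconfig/crio.minikube
	    sudo systemctl restart crio
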
	
	I0315 22:57:01.659951   83607 main.go:141] libmachine: Checking connection to Docker...
	I0315 22:57:01.659963   83607 main.go:141] libmachine: (addons-097314) Calling .GetURL
	I0315 22:57:01.661291   83607 main.go:141] libmachine: (addons-097314) DBG | Using libvirt version 6000000
	I0315 22:57:01.663659   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.664117   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.664146   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.664365   83607 main.go:141] libmachine: Docker is up and running!
	I0315 22:57:01.664382   83607 main.go:141] libmachine: Reticulating splines...
	I0315 22:57:01.664390   83607 client.go:171] duration metric: took 25.731043156s to LocalClient.Create
	I0315 22:57:01.664413   83607 start.go:167] duration metric: took 25.731106407s to libmachine.API.Create "addons-097314"
	I0315 22:57:01.664424   83607 start.go:293] postStartSetup for "addons-097314" (driver="kvm2")
	I0315 22:57:01.664443   83607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 22:57:01.664462   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.664681   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 22:57:01.664706   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.667056   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.667363   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.667395   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.667566   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.667737   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.667920   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.668086   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.746265   83607 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 22:57:01.750691   83607 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 22:57:01.750719   83607 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 22:57:01.750801   83607 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 22:57:01.750831   83607 start.go:296] duration metric: took 86.399698ms for postStartSetup
	I0315 22:57:01.750870   83607 main.go:141] libmachine: (addons-097314) Calling .GetConfigRaw
	I0315 22:57:01.751459   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:01.754108   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.754508   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.754530   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.754772   83607 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/config.json ...
	I0315 22:57:01.754971   83607 start.go:128] duration metric: took 25.840419899s to createHost
	I0315 22:57:01.754997   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.756951   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.757236   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.757267   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.757384   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.757553   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.757703   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.757839   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.758021   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:01.758165   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:01.758176   83607 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 22:57:01.860565   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710543421.834574284
	
	I0315 22:57:01.860596   83607 fix.go:216] guest clock: 1710543421.834574284
	I0315 22:57:01.860607   83607 fix.go:229] Guest: 2024-03-15 22:57:01.834574284 +0000 UTC Remote: 2024-03-15 22:57:01.754984136 +0000 UTC m=+25.954352188 (delta=79.590148ms)
	I0315 22:57:01.860634   83607 fix.go:200] guest clock delta is within tolerance: 79.590148ms
	I0315 22:57:01.860642   83607 start.go:83] releasing machines lock for "addons-097314", held for 25.946184894s
	I0315 22:57:01.860673   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.860978   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:01.863608   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.863995   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.864017   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.864147   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.864712   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.864891   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.864986   83607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 22:57:01.865052   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.865095   83607 ssh_runner.go:195] Run: cat /version.json
	I0315 22:57:01.865115   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.867777   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.867863   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.868206   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.868228   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.868249   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.868266   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.868442   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.868443   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.868695   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.868711   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.868909   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.868920   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.869107   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.869104   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.964885   83607 ssh_runner.go:195] Run: systemctl --version
	I0315 22:57:01.971145   83607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 22:57:02.128387   83607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 22:57:02.135913   83607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 22:57:02.136011   83607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 22:57:02.152257   83607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 22:57:02.152281   83607 start.go:494] detecting cgroup driver to use...
	I0315 22:57:02.152363   83607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 22:57:02.169043   83607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 22:57:02.183278   83607 docker.go:217] disabling cri-docker service (if available) ...
	I0315 22:57:02.183361   83607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 22:57:02.198110   83607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 22:57:02.212959   83607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 22:57:02.325860   83607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 22:57:02.459933   83607 docker.go:233] disabling docker service ...
	I0315 22:57:02.460022   83607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 22:57:02.474725   83607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 22:57:02.487639   83607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 22:57:02.620047   83607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 22:57:02.746721   83607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 22:57:02.760963   83607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 22:57:02.779731   83607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 22:57:02.779805   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.790166   83607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 22:57:02.790234   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.800430   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.810707   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.820923   83607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 22:57:02.831280   83607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 22:57:02.840562   83607 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 22:57:02.840630   83607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 22:57:02.853802   83607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
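
	The failed sysctl above is expected on a fresh guest: the bridge-nf-call sysctls only exist once br_netfilter is loaded, so the provisioner loads the module and enables IPv4 forwarding before restarting cri-o. By hand that is roughly:

	    sudo sysctl net.bridge.bridge-nf-call-iptables ||
	      sudo modprobe br_netfilter                          # module provides the bridge sysctls
	    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'   # node must forward pod traffic
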
	I0315 22:57:02.862950   83607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 22:57:02.980844   83607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 22:57:03.120711   83607 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 22:57:03.120818   83607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 22:57:03.126187   83607 start.go:562] Will wait 60s for crictl version
	I0315 22:57:03.126267   83607 ssh_runner.go:195] Run: which crictl
	I0315 22:57:03.130398   83607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 22:57:03.167541   83607 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 22:57:03.167638   83607 ssh_runner.go:195] Run: crio --version
	I0315 22:57:03.196364   83607 ssh_runner.go:195] Run: crio --version
	I0315 22:57:03.227014   83607 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 22:57:03.228403   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:03.231113   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:03.231522   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:03.231544   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:03.231776   83607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 22:57:03.236325   83607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 22:57:03.249536   83607 kubeadm.go:877] updating cluster {Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountTy
pe:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 22:57:03.249650   83607 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 22:57:03.249712   83607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 22:57:03.282859   83607 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 22:57:03.282929   83607 ssh_runner.go:195] Run: which lz4
	I0315 22:57:03.287274   83607 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 22:57:03.291632   83607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 22:57:03.291669   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 22:57:04.831791   83607 crio.go:444] duration metric: took 1.544556745s to copy over tarball
	I0315 22:57:04.831904   83607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 22:57:07.421547   83607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.589597094s)
	I0315 22:57:07.421597   83607 crio.go:451] duration metric: took 2.589769279s to extract the tarball
	I0315 22:57:07.421609   83607 ssh_runner.go:146] rm: /preloaded.tar.lz4
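
	The preload step scp's an images tarball of roughly 458 MB to the guest, unpacks it into /var so cri-o sees the images without pulling, and then deletes the archive. The extract/cleanup pair shown above, as plain commands:

	    sudo tar --xattrs --xattrs-include security.capability \
	         -I lz4 -C /var -xf /preloaded.tar.lz4         # unpack cached container images
	    sudo rm -f /preloaded.tar.lz4                      # reclaim the space once extracted
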
	I0315 22:57:07.463807   83607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 22:57:07.516830   83607 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 22:57:07.516862   83607 cache_images.go:84] Images are preloaded, skipping loading
	I0315 22:57:07.516871   83607 kubeadm.go:928] updating node { 192.168.39.35 8443 v1.28.4 crio true true} ...
	I0315 22:57:07.517007   83607 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-097314 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
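
	Those kubelet flags land in the drop-in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 312-byte scp a few lines below), so the remaining work is just reloading systemd and starting the service, which the log does shortly before kubeadm runs:

	    sudo systemctl daemon-reload    # pick up the new kubelet drop-in
	    sudo systemctl start kubelet    # started ahead of kubeadm init, as in the log
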
	I0315 22:57:07.517074   83607 ssh_runner.go:195] Run: crio config
	I0315 22:57:07.577011   83607 cni.go:84] Creating CNI manager for ""
	I0315 22:57:07.577037   83607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:57:07.577052   83607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 22:57:07.577082   83607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-097314 NodeName:addons-097314 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kube
rnetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 22:57:07.577260   83607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-097314"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
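
	The YAML above is written to /var/tmp/minikube/kubeadm.yaml.new (the 2154-byte scp below), later copied into place, and consumed once by the kubeadm invocation further down the log. Trimmed to its essentials, that call looks like:

	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem
	    # the real run (see the Start: line below) ignores several more
	    # DirAvailable/FileAvailable preflight checks in the same way
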
	
	I0315 22:57:07.577345   83607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 22:57:07.587748   83607 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 22:57:07.587831   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 22:57:07.597280   83607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 22:57:07.614689   83607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 22:57:07.631989   83607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0315 22:57:07.650950   83607 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I0315 22:57:07.655173   83607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 22:57:07.667194   83607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 22:57:07.787494   83607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 22:57:07.804477   83607 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314 for IP: 192.168.39.35
	I0315 22:57:07.804509   83607 certs.go:194] generating shared ca certs ...
	I0315 22:57:07.804541   83607 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:07.804710   83607 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 22:57:07.984834   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt ...
	I0315 22:57:07.984868   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt: {Name:mk3c02333392a6c3484e85a7518b751e968d59cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:07.985054   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key ...
	I0315 22:57:07.985068   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key: {Name:mk21576eef6d3218697b62737d69b1ef1151dfed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:07.985143   83607 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 22:57:08.057203   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt ...
	I0315 22:57:08.057239   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt: {Name:mk6e95ddb451577f3d23ae9dc52b109da94def40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.057411   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key ...
	I0315 22:57:08.057424   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key: {Name:mk2ce7a41e9e2a5497efd806366764a0af769c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.057495   83607 certs.go:256] generating profile certs ...
	I0315 22:57:08.057556   83607 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.key
	I0315 22:57:08.057577   83607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt with IP's: []
	I0315 22:57:08.164041   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt ...
	I0315 22:57:08.164074   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: {Name:mk2eaa2f399cb2eaafc178b08e708540f2ded1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.164233   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.key ...
	I0315 22:57:08.164245   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.key: {Name:mkd675fdcce47d9432783a09331b990f01e8f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.164312   83607 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65
	I0315 22:57:08.164352   83607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35]
	I0315 22:57:08.323595   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65 ...
	I0315 22:57:08.323632   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65: {Name:mk006728a1b0349596d9911ad44e9bd106cf826e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.323825   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65 ...
	I0315 22:57:08.323845   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65: {Name:mkf8f076d3e10addd5084544690433f5ba38b7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.323943   83607 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt
	I0315 22:57:08.324108   83607 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key
	I0315 22:57:08.324185   83607 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key
	I0315 22:57:08.324212   83607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt with IP's: []
	I0315 22:57:08.469102   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt ...
	I0315 22:57:08.469133   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt: {Name:mk84532477fc1d12e43e727f4f4b0d0ea6f9c99c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.469315   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key ...
	I0315 22:57:08.469335   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key: {Name:mkab0511c5c861eca417cd847473ba1dd53b7b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.469685   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 22:57:08.469740   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 22:57:08.469777   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 22:57:08.469805   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 22:57:08.470511   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 22:57:08.519837   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 22:57:08.548815   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 22:57:08.579224   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 22:57:08.603353   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0315 22:57:08.627874   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 22:57:08.653357   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 22:57:08.678971   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 22:57:08.704191   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 22:57:08.728497   83607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 22:57:08.746057   83607 ssh_runner.go:195] Run: openssl version
	I0315 22:57:08.751916   83607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 22:57:08.762904   83607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 22:57:08.767480   83607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 22:57:08.767556   83607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 22:57:08.773119   83607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
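
	The openssl/ln pair above is how the minikube CA gets into the guest's trust store: OpenSSL locates CAs in /etc/ssl/certs by a symlink named after the certificate's subject hash (b5213941 in this run). Done manually:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # hash-named link OpenSSL resolves
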
	I0315 22:57:08.783476   83607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 22:57:08.787493   83607 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 22:57:08.787537   83607 kubeadm.go:391] StartCluster: {Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 C
lusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:
9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 22:57:08.787653   83607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 22:57:08.787707   83607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 22:57:08.822494   83607 cri.go:89] found id: ""
	I0315 22:57:08.822584   83607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 22:57:08.832679   83607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 22:57:08.842513   83607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 22:57:08.852068   83607 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 22:57:08.852093   83607 kubeadm.go:156] found existing configuration files:
	
	I0315 22:57:08.852141   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 22:57:08.860906   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 22:57:08.860961   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 22:57:08.870011   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 22:57:08.878548   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 22:57:08.878588   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 22:57:08.887629   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 22:57:08.896295   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 22:57:08.896350   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 22:57:08.905252   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 22:57:08.914336   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 22:57:08.914390   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 22:57:08.923446   83607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 22:57:09.112409   83607 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 22:57:19.250947   83607 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 22:57:19.251028   83607 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 22:57:19.251087   83607 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 22:57:19.251205   83607 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 22:57:19.251300   83607 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 22:57:19.251377   83607 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 22:57:19.253120   83607 out.go:204]   - Generating certificates and keys ...
	I0315 22:57:19.253203   83607 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 22:57:19.253258   83607 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 22:57:19.253374   83607 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 22:57:19.253464   83607 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 22:57:19.253556   83607 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 22:57:19.253642   83607 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 22:57:19.253737   83607 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 22:57:19.253908   83607 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-097314 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I0315 22:57:19.253996   83607 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 22:57:19.254139   83607 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-097314 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I0315 22:57:19.254224   83607 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 22:57:19.254324   83607 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 22:57:19.254393   83607 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 22:57:19.254480   83607 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 22:57:19.254564   83607 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 22:57:19.254666   83607 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 22:57:19.254754   83607 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 22:57:19.254829   83607 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 22:57:19.254950   83607 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 22:57:19.255044   83607 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 22:57:19.256742   83607 out.go:204]   - Booting up control plane ...
	I0315 22:57:19.256861   83607 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 22:57:19.257008   83607 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 22:57:19.257093   83607 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 22:57:19.257247   83607 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 22:57:19.257369   83607 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 22:57:19.257426   83607 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 22:57:19.257702   83607 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 22:57:19.257802   83607 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.002763 seconds
	I0315 22:57:19.257940   83607 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 22:57:19.258110   83607 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 22:57:19.258187   83607 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 22:57:19.258426   83607 kubeadm.go:309] [mark-control-plane] Marking the node addons-097314 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 22:57:19.258511   83607 kubeadm.go:309] [bootstrap-token] Using token: qikmp3.n4r8rw2ox0aq6wwt
	I0315 22:57:19.260150   83607 out.go:204]   - Configuring RBAC rules ...
	I0315 22:57:19.260305   83607 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 22:57:19.260409   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 22:57:19.260567   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 22:57:19.260705   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 22:57:19.260833   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 22:57:19.260997   83607 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 22:57:19.261166   83607 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 22:57:19.261229   83607 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 22:57:19.261276   83607 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 22:57:19.261287   83607 kubeadm.go:309] 
	I0315 22:57:19.261341   83607 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 22:57:19.261348   83607 kubeadm.go:309] 
	I0315 22:57:19.261442   83607 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 22:57:19.261454   83607 kubeadm.go:309] 
	I0315 22:57:19.261478   83607 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 22:57:19.261556   83607 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 22:57:19.261631   83607 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 22:57:19.261641   83607 kubeadm.go:309] 
	I0315 22:57:19.261712   83607 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 22:57:19.261721   83607 kubeadm.go:309] 
	I0315 22:57:19.261791   83607 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 22:57:19.261804   83607 kubeadm.go:309] 
	I0315 22:57:19.261872   83607 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 22:57:19.261996   83607 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 22:57:19.262091   83607 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 22:57:19.262106   83607 kubeadm.go:309] 
	I0315 22:57:19.262187   83607 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 22:57:19.262301   83607 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 22:57:19.262318   83607 kubeadm.go:309] 
	I0315 22:57:19.262422   83607 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qikmp3.n4r8rw2ox0aq6wwt \
	I0315 22:57:19.262566   83607 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0315 22:57:19.262600   83607 kubeadm.go:309] 	--control-plane 
	I0315 22:57:19.262606   83607 kubeadm.go:309] 
	I0315 22:57:19.262708   83607 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 22:57:19.262717   83607 kubeadm.go:309] 
	I0315 22:57:19.262815   83607 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qikmp3.n4r8rw2ox0aq6wwt \
	I0315 22:57:19.262954   83607 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0315 22:57:19.262968   83607 cni.go:84] Creating CNI manager for ""
	I0315 22:57:19.262978   83607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:57:19.264538   83607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 22:57:19.265852   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 22:57:19.294335   83607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
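For context, the 457-byte conflist scp'd above is what configures the CNI bridge plugin that CRI-O uses for pod networking on this node. The sketch below shows the general shape such a file typically has; the bridge name and pod subnet are illustrative assumptions, not necessarily what this run wrote.

  # Illustrative sketch only -- not the exact conflist minikube generated in this run.
  # Writes a bridge CNI config with host-local IPAM plus the portmap plugin.
  cat <<'EOF' | sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null
  {
    "cniVersion": "0.3.1",
    "name": "bridge",
    "plugins": [
      {
        "type": "bridge",
        "bridge": "bridge",
        "isDefaultGateway": true,
        "ipMasq": true,
        "hairpinMode": true,
        "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
      },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF
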
	I0315 22:57:19.376103   83607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 22:57:19.376191   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:19.376201   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-097314 minikube.k8s.io/updated_at=2024_03_15T22_57_19_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=addons-097314 minikube.k8s.io/primary=true
	I0315 22:57:19.549271   83607 ops.go:34] apiserver oom_adj: -16
	I0315 22:57:19.549427   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:20.049527   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:20.549464   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:21.049925   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:21.550386   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:22.050106   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:22.549967   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:23.050132   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:23.549826   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:24.050381   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:24.550084   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:25.050076   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:25.550283   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:26.050183   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:26.549587   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:27.050401   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:27.550312   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:28.049583   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:28.549907   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:29.049662   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:29.549923   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:30.049542   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:30.549565   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:31.050062   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:31.550402   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:32.049494   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:32.201740   83607 kubeadm.go:1107] duration metric: took 12.825618077s to wait for elevateKubeSystemPrivileges
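The burst of identical "kubectl get sa default" calls above is a poll: the step that elevates kube-system privileges cannot complete until the default ServiceAccount exists, so minikube retries roughly every half second until it appears, which accounts for the ~12.8s reported here. A minimal standalone sketch of the same idea (assuming the same kubectl binary and kubeconfig paths seen in the log; minikube itself does this from Go via ssh_runner, not a shell loop):

  # Poll until the "default" ServiceAccount exists, then continue.
  until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
    sleep 0.5
  done
  echo "default ServiceAccount is ready"
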
	W0315 22:57:32.201780   83607 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 22:57:32.201789   83607 kubeadm.go:393] duration metric: took 23.4142571s to StartCluster
	I0315 22:57:32.201807   83607 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:32.201929   83607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:57:32.202274   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:32.202454   83607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 22:57:32.202482   83607 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 22:57:32.204449   83607 out.go:177] * Verifying Kubernetes components...
	I0315 22:57:32.202573   83607 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0315 22:57:32.202667   83607 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:57:32.206096   83607 addons.go:69] Setting inspektor-gadget=true in profile "addons-097314"
	I0315 22:57:32.206112   83607 addons.go:69] Setting gcp-auth=true in profile "addons-097314"
	I0315 22:57:32.206112   83607 addons.go:69] Setting yakd=true in profile "addons-097314"
	I0315 22:57:32.206135   83607 mustload.go:65] Loading cluster: addons-097314
	I0315 22:57:32.206145   83607 addons.go:234] Setting addon inspektor-gadget=true in "addons-097314"
	I0315 22:57:32.206151   83607 addons.go:69] Setting metrics-server=true in profile "addons-097314"
	I0315 22:57:32.206147   83607 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-097314"
	I0315 22:57:32.206171   83607 addons.go:234] Setting addon metrics-server=true in "addons-097314"
	I0315 22:57:32.206183   83607 addons.go:69] Setting ingress=true in profile "addons-097314"
	I0315 22:57:32.206194   83607 addons.go:69] Setting registry=true in profile "addons-097314"
	I0315 22:57:32.206195   83607 addons.go:69] Setting ingress-dns=true in profile "addons-097314"
	I0315 22:57:32.206201   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206215   83607 addons.go:234] Setting addon ingress=true in "addons-097314"
	I0315 22:57:32.206217   83607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 22:57:32.206230   83607 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-097314"
	I0315 22:57:32.206233   83607 addons.go:234] Setting addon registry=true in "addons-097314"
	I0315 22:57:32.206249   83607 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-097314"
	I0315 22:57:32.206263   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206275   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206179   83607 addons.go:69] Setting storage-provisioner=true in profile "addons-097314"
	I0315 22:57:32.206327   83607 addons.go:234] Setting addon storage-provisioner=true in "addons-097314"
	I0315 22:57:32.206334   83607 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:57:32.206356   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206402   83607 addons.go:69] Setting volumesnapshots=true in profile "addons-097314"
	I0315 22:57:32.206424   83607 addons.go:234] Setting addon volumesnapshots=true in "addons-097314"
	I0315 22:57:32.206442   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206525   83607 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-097314"
	I0315 22:57:32.206557   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206749   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206766   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206767   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206146   83607 addons.go:234] Setting addon yakd=true in "addons-097314"
	I0315 22:57:32.206794   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206801   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206807   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206219   83607 addons.go:234] Setting addon ingress-dns=true in "addons-097314"
	I0315 22:57:32.206813   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206821   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206841   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206156   83607 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-097314"
	I0315 22:57:32.206873   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206883   83607 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-097314"
	I0315 22:57:32.206891   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206799   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206908   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206911   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206105   83607 addons.go:69] Setting cloud-spanner=true in profile "addons-097314"
	I0315 22:57:32.206975   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206979   83607 addons.go:234] Setting addon cloud-spanner=true in "addons-097314"
	I0315 22:57:32.206993   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206188   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206136   83607 addons.go:69] Setting default-storageclass=true in profile "addons-097314"
	I0315 22:57:32.206096   83607 addons.go:69] Setting helm-tiller=true in profile "addons-097314"
	I0315 22:57:32.207112   83607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-097314"
	I0315 22:57:32.207129   83607 addons.go:234] Setting addon helm-tiller=true in "addons-097314"
	I0315 22:57:32.207346   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207355   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207478   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207373   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.207371   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.207391   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207517   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207394   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.207543   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207841   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207863   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207879   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207883   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207887   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207907   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207410   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.208129   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207416   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.208265   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.227775   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43805
	I0315 22:57:32.227980   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33085
	I0315 22:57:32.228082   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I0315 22:57:32.228547   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.228569   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.228579   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.229078   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.229100   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.229081   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.229155   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.229169   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.229185   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.229501   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.229561   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.229619   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.229641   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0315 22:57:32.230126   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.230128   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.230157   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.230169   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.230329   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.230718   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.230741   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.230809   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.231074   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.236310   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43921
	I0315 22:57:32.236644   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.237146   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.237169   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.237516   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.237608   83607 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-097314"
	I0315 22:57:32.237653   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.238016   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.238025   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.238042   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.238056   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.238405   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.238441   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.269438   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0315 22:57:32.269701   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0315 22:57:32.270010   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.270212   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.270714   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.270731   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.271076   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.271293   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.271310   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.271520   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0315 22:57:32.271749   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.271793   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.271906   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.272494   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.272496   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.272548   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.272692   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.272968   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.273135   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.274940   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.275359   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.275402   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.276172   83607 addons.go:234] Setting addon default-storageclass=true in "addons-097314"
	I0315 22:57:32.276219   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.276597   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.276625   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.277627   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0315 22:57:32.277614   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0315 22:57:32.277990   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.278063   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.278428   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.278451   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.278785   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.278874   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.278903   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.279355   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.279394   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.279595   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.281448   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0315 22:57:32.282145   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.282774   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.282791   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.283426   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.283716   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.285887   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0315 22:57:32.286388   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.286931   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.286950   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.287585   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.288222   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.288263   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.288878   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.288907   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.289370   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0315 22:57:32.291253   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0315 22:57:32.291482   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.293685   83607 out.go:177]   - Using image docker.io/registry:2.8.3
	I0315 22:57:32.291963   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.292308   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.292792   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0315 22:57:32.293544   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0315 22:57:32.294104   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0315 22:57:32.296184   83607 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0315 22:57:32.297472   83607 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0315 22:57:32.297494   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0315 22:57:32.297514   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.295474   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.295519   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.295628   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.297647   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.295715   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.296312   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.297711   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.296647   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0315 22:57:32.297122   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44243
	I0315 22:57:32.298300   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.298317   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.298456   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.298469   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.298919   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.299015   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299076   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299114   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299156   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299267   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.299346   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0315 22:57:32.299753   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.299796   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.300096   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.300112   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.300128   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.300158   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.300241   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.300252   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.300364   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.300375   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.300571   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.300735   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.301256   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.301293   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.301476   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.301528   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.301662   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.301979   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.302034   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0315 22:57:32.302322   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.302410   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.302739   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.302758   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.303123   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.303691   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.303740   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.303955   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.303998   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.304235   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.304253   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.306025   83607 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0315 22:57:32.304469   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.304676   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.304724   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.305835   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.306504   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.307533   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0315 22:57:32.307549   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0315 22:57:32.307569   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.307717   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.309308   83607 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0315 22:57:32.307977   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.308067   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.310773   83607 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0315 22:57:32.310866   83607 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0315 22:57:32.310888   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.311019   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.311615   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.312199   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0315 22:57:32.312271   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0315 22:57:32.312330   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.312393   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.312479   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.312610   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.313914   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0315 22:57:32.313943   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.315151   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 22:57:32.315166   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 22:57:32.315184   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.315231   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.316693   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0315 22:57:32.315446   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.315490   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I0315 22:57:32.316971   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.319296   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0315 22:57:32.317955   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.317650   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0315 22:57:32.318229   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.318388   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.320471   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.322437   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0315 22:57:32.322495   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.321542   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.321590   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.321788   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.322028   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.321331   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.323931   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0315 22:57:32.324050   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.324117   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.324141   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.324322   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.324360   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.324512   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.325258   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0315 22:57:32.325442   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45309
	I0315 22:57:32.325841   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0315 22:57:32.325901   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.325988   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.326283   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.326295   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.326320   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I0315 22:57:32.326341   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.326340   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.326380   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.327465   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0315 22:57:32.328752   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0315 22:57:32.328768   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0315 22:57:32.328783   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.327726   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.327867   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.327916   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.328072   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.328862   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.328168   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.328902   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.328164   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.328992   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.328203   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.329231   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.329277   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.329622   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.329863   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.329907   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.330340   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0315 22:57:32.331055   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.331408   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.331435   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.331581   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.331597   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.332165   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.332334   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.334312   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.334312   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.334326   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.334346   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.334346   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.334378   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.334315   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.334399   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.336351   83607 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0315 22:57:32.334866   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.335156   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.337688   83607 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 22:57:32.337769   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0315 22:57:32.337791   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.337739   83607 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 22:57:32.339302   83607 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 22:57:32.339329   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 22:57:32.339348   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.338532   83607 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0315 22:57:32.338704   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.340624   83607 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 22:57:32.340644   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0315 22:57:32.340661   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.340719   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.342023   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I0315 22:57:32.342754   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.342952   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.343467   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.343486   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.343682   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.343706   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.343999   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.344222   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.344408   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.344998   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.345251   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.345283   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.345328   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0315 22:57:32.345440   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.345524   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.345532   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.345793   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.347450   83607 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0315 22:57:32.346064   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.348726   83607 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0315 22:57:32.348741   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0315 22:57:32.348759   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.346122   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.348809   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.346152   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.346265   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.346309   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.347362   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.347930   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.349121   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.350713   83607 out.go:177]   - Using image docker.io/busybox:stable
	I0315 22:57:32.349585   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.349609   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.350994   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0315 22:57:32.353211   83607 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0315 22:57:32.353222   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.354443   83607 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 22:57:32.354462   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0315 22:57:32.354463   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.354479   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.354484   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.352214   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.352254   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.352163   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.353854   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.354043   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.354722   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.354806   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.355378   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.355412   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.355435   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.355887   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.355905   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.356094   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.356749   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.356932   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.357507   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.359162   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0315 22:57:32.358099   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0315 22:57:32.358134   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0315 22:57:32.358674   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.359097   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.359386   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.360400   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0315 22:57:32.360408   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0315 22:57:32.360418   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.360558   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.360581   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.361199   83607 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 22:57:32.361209   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 22:57:32.361220   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.361271   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.361491   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.361594   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.361859   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.362106   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.362123   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.362182   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.362225   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.362238   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.362724   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.362890   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.363990   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.364310   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.364859   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.366798   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0315 22:57:32.365701   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.366796   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.365941   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.366205   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.366934   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.366680   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.366838   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.368310   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.366959   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.367209   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.367230   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.369430   83607 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0315 22:57:32.368283   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 22:57:32.368489   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.368508   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.370793   83607 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0315 22:57:32.370806   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0315 22:57:32.370817   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.371459   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.372168   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 22:57:32.371657   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.373571   83607 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 22:57:32.373591   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0315 22:57:32.373608   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.374099   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.374826   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.374849   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.375009   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.375220   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.375448   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.375673   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.376993   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.377415   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.377438   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.377576   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.377717   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.377849   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.377966   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	W0315 22:57:32.383985   83607 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40180->192.168.39.35:22: read: connection reset by peer
	I0315 22:57:32.384015   83607 retry.go:31] will retry after 285.366853ms: ssh: handshake failed: read tcp 192.168.39.1:40180->192.168.39.35:22: read: connection reset by peer
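	The handshake failure above is treated as transient and simply retried after a short delay. A minimal Go sketch of that retry-with-backoff pattern (generic and illustrative only; the function name and backoff values here are assumptions, not minikube's retry package API):

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// retryTransient retries fn with a doubling delay until it succeeds or
	// the attempts are exhausted; only the last error is reported.
	func retryTransient(attempts int, initial time.Duration, fn func() error) error {
		var err error
		delay := initial
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay)
			delay *= 2 // simple exponential backoff
		}
		return fmt.Errorf("after %d attempts: %w", attempts, err)
	}

	func main() {
		calls := 0
		err := retryTransient(4, 250*time.Millisecond, func() error {
			calls++
			if calls < 3 {
				return errors.New("ssh: handshake failed") // simulated transient failure
			}
			return nil
		})
		fmt.Println(err, "succeeded after", calls, "calls")
	}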
	I0315 22:57:32.726493   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0315 22:57:32.726519   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0315 22:57:32.842629   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 22:57:32.912616   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0315 22:57:32.913030   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 22:57:32.913056   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0315 22:57:32.914092   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0315 22:57:32.914109   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0315 22:57:32.922583   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 22:57:32.976537   83607 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0315 22:57:32.976564   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0315 22:57:32.977987   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 22:57:32.982332   83607 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0315 22:57:32.982353   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0315 22:57:32.992976   83607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 22:57:32.993916   83607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 22:57:32.994486   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0315 22:57:32.994512   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0315 22:57:32.996991   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 22:57:32.997010   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 22:57:32.997892   83607 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0315 22:57:32.997911   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0315 22:57:33.007758   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 22:57:33.051670   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 22:57:33.060836   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0315 22:57:33.060859   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0315 22:57:33.063103   83607 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0315 22:57:33.063123   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0315 22:57:33.092567   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 22:57:33.103535   83607 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0315 22:57:33.103558   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0315 22:57:33.145229   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0315 22:57:33.145258   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0315 22:57:33.168360   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 22:57:33.168386   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 22:57:33.192767   83607 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0315 22:57:33.192805   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0315 22:57:33.262222   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0315 22:57:33.262245   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0315 22:57:33.275581   83607 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0315 22:57:33.275607   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0315 22:57:33.294246   83607 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0315 22:57:33.294278   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0315 22:57:33.318841   83607 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0315 22:57:33.318865   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0315 22:57:33.476720   83607 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0315 22:57:33.476756   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0315 22:57:33.486999   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0315 22:57:33.487025   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0315 22:57:33.489628   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 22:57:33.543998   83607 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0315 22:57:33.544029   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0315 22:57:33.563190   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0315 22:57:33.563213   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0315 22:57:33.563812   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0315 22:57:33.642818   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0315 22:57:33.750675   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0315 22:57:33.750714   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0315 22:57:33.760839   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0315 22:57:33.760869   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0315 22:57:33.799639   83607 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0315 22:57:33.799665   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0315 22:57:33.836039   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0315 22:57:34.124867   83607 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 22:57:34.124891   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0315 22:57:34.157962   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0315 22:57:34.157999   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0315 22:57:34.168968   83607 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0315 22:57:34.168996   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0315 22:57:34.344508   83607 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 22:57:34.344533   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0315 22:57:34.354120   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0315 22:57:34.354146   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0315 22:57:34.463671   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 22:57:34.584035   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0315 22:57:34.584065   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0315 22:57:34.609191   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 22:57:34.824700   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0315 22:57:34.824724   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0315 22:57:34.904906   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 22:57:34.904938   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0315 22:57:35.037346   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 22:57:39.012868   83607 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0315 22:57:39.012910   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:39.016479   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.016980   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:39.017006   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.017225   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:39.017436   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:39.017658   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:39.017836   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:39.240199   83607 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0315 22:57:39.426529   83607 addons.go:234] Setting addon gcp-auth=true in "addons-097314"
	I0315 22:57:39.426587   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:39.426885   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:39.426912   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:39.443616   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0315 22:57:39.444187   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:39.444736   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:39.444770   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:39.445159   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:39.445687   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:39.445729   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:39.461509   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0315 22:57:39.461999   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:39.462492   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:39.462513   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:39.462905   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:39.463153   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:39.464855   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:39.465150   83607 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0315 22:57:39.465180   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:39.468286   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.468689   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:39.468714   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.468860   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:39.469037   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:39.469205   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:39.469368   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:39.604895   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.76221535s)
	I0315 22:57:39.604949   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.69230024s)
	I0315 22:57:39.604956   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.604969   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.605026   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.605037   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.605092   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682479131s)
	I0315 22:57:39.605129   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.627109833s)
	I0315 22:57:39.605145   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.605150   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.605158   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.605156   83607 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.612158607s)
	I0315 22:57:39.605183   83607 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.611244009s)
	I0315 22:57:39.605200   83607 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0315 22:57:39.605158   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606182   83607 node_ready.go:35] waiting up to 6m0s for node "addons-097314" to be "Ready" ...
	I0315 22:57:39.606405   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606415   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606428   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606429   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606449   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606454   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606458   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606462   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606471   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606477   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606494   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606524   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606532   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606540   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606545   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606547   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606580   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606588   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606602   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606609   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606770   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606808   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606814   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606904   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606933   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606940   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.607113   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.607144   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.607151   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.607236   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.607266   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.607273   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.683106   83607 node_ready.go:49] node "addons-097314" has status "Ready":"True"
	I0315 22:57:39.683143   83607 node_ready.go:38] duration metric: took 76.937145ms for node "addons-097314" to be "Ready" ...
	I0315 22:57:39.683155   83607 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 22:57:39.764002   83607 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace to be "Ready" ...
	I0315 22:57:40.235729   83607 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-097314" context rescaled to 1 replicas
	I0315 22:57:40.274084   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.22237558s)
	I0315 22:57:40.274124   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.266343388s)
	I0315 22:57:40.274146   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274161   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274146   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274224   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274485   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274500   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274522   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.274522   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274532   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274541   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274577   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274598   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.274627   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274639   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274827   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274867   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274877   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274891   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274895   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.274897   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.317503   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.317526   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.317826   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.317829   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.317852   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	W0315 22:57:40.317950   83607 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
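	The "object has been modified" error above is a standard optimistic-concurrency conflict: the StorageClass was updated by someone else between the read and the write, so the stale resourceVersion is rejected. A hedged client-go sketch of the usual remedy, re-reading the object and retrying the update on conflict (clientset construction omitted; the function name is illustrative):

	package addons

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// unsetDefault marks a StorageClass non-default, retrying if another
	// writer updated the object between our Get and Update.
	func unsetDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict here triggers a fresh Get and another attempt
		})
	}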
	I0315 22:57:40.344548   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.344570   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.344851   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.344877   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.425224   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.332615062s)
	I0315 22:57:41.425293   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425306   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425307   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.935634226s)
	I0315 22:57:41.425354   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.861499081s)
	I0315 22:57:41.425381   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425395   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.782539953s)
	I0315 22:57:41.425405   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425412   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425420   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425359   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425497   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425553   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.961847461s)
	W0315 22:57:41.425598   83607 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0315 22:57:41.425628   83607 retry.go:31] will retry after 277.136751ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
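	The failure above is the classic CRD-ordering problem: the VolumeSnapshotClass object sits in the same apply batch as the CRD that defines it, and the CRD is not yet established when the object is submitted, so the batch fails once and succeeds on the retry. A hedged sketch of waiting for a CRD to reach the Established condition before applying its custom resources (apiextensions clientset construction omitted; the example CRD name is taken from the manifests above):

	package addons

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// waitForCRD blocks until the named CRD reports Established=True or the timeout expires.
	func waitForCRD(ctx context.Context, c apiextclient.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true, func(ctx context.Context) (bool, error) {
			crd, err := c.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // not created yet; keep polling
			}
			for _, cond := range crd.Status.Conditions {
				if cond.Type == apiextv1.Established && cond.Status == apiextv1.ConditionTrue {
					return true, nil
				}
			}
			return false, nil
		})
	}

	// e.g. waitForCRD(ctx, client, "volumesnapshotclasses.snapshot.storage.k8s.io", 30*time.Second)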
	I0315 22:57:41.425701   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.816473394s)
	I0315 22:57:41.425456   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.589361607s)
	I0315 22:57:41.425725   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425743   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425757   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425809   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.426176   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.426218   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.426226   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.426235   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.426243   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.426318   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.426327   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.426334   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.426342   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.426400   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.426423   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.426430   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.426437   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.426444   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.427714   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427770   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427798   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.427807   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.427821   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.427823   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427861   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427897   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427922   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.427934   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427971   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.428008   83607 addons.go:470] Verifying addon ingress=true in "addons-097314"
	I0315 22:57:41.428039   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.428065   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.428096   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.428108   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.430685   83607 out.go:177] * Verifying ingress addon...
	I0315 22:57:41.427977   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427940   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.427958   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427900   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.428345   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.429106   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.431997   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.432008   83607 addons.go:470] Verifying addon registry=true in "addons-097314"
	I0315 22:57:41.432027   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.432056   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.432061   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.432069   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.433443   83607 out.go:177] * Verifying registry addon...
	I0315 22:57:41.432061   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.432318   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.432329   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.432339   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.432346   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.432868   83607 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0315 22:57:41.435026   83607 addons.go:470] Verifying addon metrics-server=true in "addons-097314"
	I0315 22:57:41.435051   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.435085   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.436419   83607 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-097314 service yakd-dashboard -n yakd-dashboard
	
	I0315 22:57:41.435811   83607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0315 22:57:41.455662   83607 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0315 22:57:41.455685   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:41.455910   83607 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0315 22:57:41.455932   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
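	The kapi.go lines above poll pods matched by a label selector until they report Ready. A minimal client-go sketch of one such readiness check (a single pass, not the full polling loop; the namespace and selector strings mirror the log):

	package addons

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// allPodsReady reports whether every pod matching selector in ns has condition Ready=True.
	func allPodsReady(ctx context.Context, cs kubernetes.Interface, ns, selector string) (bool, error) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return false, err
		}
		if len(pods.Items) == 0 {
			return false, fmt.Errorf("no pods match %q in %q", selector, ns)
		}
		for _, p := range pods.Items {
			ready := false
			for _, cond := range p.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					ready = true
				}
			}
			if !ready {
				return false, nil
			}
		}
		return true, nil
	}

	// e.g. allPodsReady(ctx, cs, "ingress-nginx", "app.kubernetes.io/name=ingress-nginx")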
	I0315 22:57:41.703714   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 22:57:41.771797   83607 pod_ready.go:102] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:41.940160   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:41.943061   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:42.464685   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:42.465131   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:42.965969   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:42.966038   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:43.478644   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:43.500872   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:43.695921   83607 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.230739627s)
	I0315 22:57:43.697584   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 22:57:43.695892   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.658458721s)
	I0315 22:57:43.697652   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:43.697675   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:43.699190   83607 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0315 22:57:43.698033   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:43.698055   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:43.700508   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:43.700521   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:43.700532   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:43.700590   83607 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0315 22:57:43.700614   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0315 22:57:43.700827   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:43.700843   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:43.700853   83607 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-097314"
	I0315 22:57:43.702295   83607 out.go:177] * Verifying csi-hostpath-driver addon...
	I0315 22:57:43.700988   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:43.704428   83607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0315 22:57:43.722081   83607 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0315 22:57:43.722099   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:43.791575   83607 pod_ready.go:102] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:43.850975   83607 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0315 22:57:43.851001   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0315 22:57:43.941220   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:43.950479   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:43.981996   83607 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 22:57:43.982019   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0315 22:57:44.056877   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 22:57:44.214183   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:44.439731   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:44.443567   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:44.733133   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:44.885974   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.182184508s)
	I0315 22:57:44.886047   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:44.886070   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:44.886401   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:44.886447   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:44.886461   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:44.886470   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:44.886733   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:44.886733   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:44.886787   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:44.940004   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:44.948897   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:45.211742   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:45.442752   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:45.444635   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:45.715843   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:45.823043   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.76611992s)
	I0315 22:57:45.823108   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:45.823122   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:45.823498   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:45.823523   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:45.823568   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:45.823583   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:45.823588   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:45.823830   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:45.823862   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:45.823875   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:45.825805   83607 addons.go:470] Verifying addon gcp-auth=true in "addons-097314"
	I0315 22:57:45.827474   83607 out.go:177] * Verifying gcp-auth addon...
	I0315 22:57:45.829753   83607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0315 22:57:45.834716   83607 pod_ready.go:102] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:45.841885   83607 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0315 22:57:45.841911   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:45.942313   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:45.948221   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:46.210568   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:46.271712   83607 pod_ready.go:97] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.35 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-15 22:57:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-15 22:57:34 +0000 UTC,FinishedAt:2024-03-15 22:57:45 +0000 UTC,ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e Started:0xc002d62fd0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0315 22:57:46.271744   83607 pod_ready.go:81] duration metric: took 6.507708877s for pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace to be "Ready" ...
	E0315 22:57:46.271757   83607 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.35 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-15 22:57:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-15 22:57:34 +0000 UTC,FinishedAt:2024-03-15 22:57:45 +0000 UTC,ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e Started:0xc002d62fd0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0315 22:57:46.271766   83607 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace to be "Ready" ...
	I0315 22:57:46.335077   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:46.440637   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:46.443706   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:46.710234   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:46.834473   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:46.940072   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:46.942369   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:47.210840   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:47.333772   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:47.440251   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:47.443630   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:47.710180   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:47.833823   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:47.940283   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:47.942433   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:48.210678   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:48.278842   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:48.334933   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:48.440325   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:48.447654   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:48.710271   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:48.833985   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:48.950160   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:48.961050   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:49.435249   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:49.443544   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:49.447797   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:49.448454   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:49.709820   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:49.834013   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:49.940253   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:49.942675   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:50.210575   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:50.334659   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:50.440595   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:50.443562   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:50.709970   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:50.778614   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:50.834033   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:50.940901   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:50.942884   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:51.211031   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:51.333189   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:51.439659   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:51.442516   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:51.709672   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:51.833573   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:51.940334   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:51.941906   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:52.210988   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:52.332970   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:52.440292   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:52.447241   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:52.712891   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:52.778804   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:52.834041   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:52.940420   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:52.942743   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:53.209973   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:53.334147   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:53.440432   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:53.443242   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:53.710399   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:53.834424   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:53.939800   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:53.943022   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:54.211055   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:54.333970   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:54.439995   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:54.443382   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:54.710820   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:54.778832   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:54.834420   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:54.940293   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:54.946021   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:55.210366   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:55.333858   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:55.440638   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:55.448312   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:55.710378   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:55.834345   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:55.939750   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:55.942756   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:56.210876   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:56.335867   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:56.441109   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:56.443129   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:56.709774   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:56.834799   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:56.941065   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:56.942421   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:57.212188   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:57.279196   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:57.349715   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:57.441881   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:57.443747   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:57.711519   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:57.834827   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:57.940953   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:57.942928   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:58.210118   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:58.345401   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:58.439308   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:58.442174   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:58.711886   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:58.834764   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:58.941653   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:58.947477   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:59.211479   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:59.280754   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:59.334694   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:59.439999   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:59.443010   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:59.711158   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:59.833360   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:59.939532   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:59.943659   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:00.225340   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:00.333886   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:00.440853   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:00.445285   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:00.710882   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:00.851055   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:00.945036   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:00.952842   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:01.210726   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:01.333553   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:01.439598   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:01.454095   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:01.711171   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:01.784565   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:01.834368   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:01.943400   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:01.945945   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:02.210391   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:02.336471   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:02.439403   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:02.442526   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:02.710903   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:02.834603   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:02.939952   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:02.943218   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:03.211209   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:03.334048   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:03.441033   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:03.449532   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:03.711529   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:03.834179   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:03.942732   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:03.942866   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:04.210552   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:04.279302   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:04.333806   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:04.439876   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:04.442937   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:04.710088   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:04.833404   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:04.939474   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:04.942266   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:05.210691   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:05.333655   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:05.451009   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:05.453933   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:05.710707   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:05.833453   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:05.939731   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:05.942852   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:06.210740   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:06.455615   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:06.455968   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:06.457656   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:06.465016   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:06.710235   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:06.833409   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:06.939775   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:06.942694   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:07.211469   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:07.334043   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:07.440566   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:07.442166   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:07.710656   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:07.834633   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:07.940618   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:07.948558   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:08.210176   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:08.334231   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:08.439773   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:08.442689   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:08.711489   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:08.779008   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:08.835175   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:08.941304   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:08.943495   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:09.209760   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:09.333401   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:09.440012   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:09.443264   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:09.710473   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:09.834018   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:09.939944   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:09.942490   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:10.232335   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:10.505039   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:10.506766   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:10.507343   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:10.710506   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:10.780066   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:10.833533   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:10.940543   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:10.941902   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:11.210702   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:11.334222   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:11.440752   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:11.443029   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:11.710085   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:11.834136   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:11.941472   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:11.942965   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:12.210533   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:12.334071   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:12.452469   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:12.452956   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:12.710983   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:12.777592   83607 pod_ready.go:92] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.777620   83607 pod_ready.go:81] duration metric: took 26.505842389s for pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.777632   83607 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.782954   83607 pod_ready.go:92] pod "etcd-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.782980   83607 pod_ready.go:81] duration metric: took 5.340052ms for pod "etcd-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.782992   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.790890   83607 pod_ready.go:92] pod "kube-apiserver-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.790911   83607 pod_ready.go:81] duration metric: took 7.911535ms for pod "kube-apiserver-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.790922   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.795880   83607 pod_ready.go:92] pod "kube-controller-manager-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.795905   83607 pod_ready.go:81] duration metric: took 4.976053ms for pod "kube-controller-manager-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.795920   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zspm2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.801086   83607 pod_ready.go:92] pod "kube-proxy-zspm2" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.801109   83607 pod_ready.go:81] duration metric: took 5.181246ms for pod "kube-proxy-zspm2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.801120   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.833350   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:12.941755   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:12.944090   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:13.175879   83607 pod_ready.go:92] pod "kube-scheduler-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:13.175907   83607 pod_ready.go:81] duration metric: took 374.779473ms for pod "kube-scheduler-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.175918   83607 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-spvr4" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.209804   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:13.333992   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:13.440801   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:13.442768   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:13.575501   83607 pod_ready.go:92] pod "metrics-server-69cf46c98-spvr4" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:13.575533   83607 pod_ready.go:81] duration metric: took 399.607151ms for pod "metrics-server-69cf46c98-spvr4" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.575546   83607 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gpjp2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.713035   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:13.836196   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:13.942723   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:13.942850   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:13.976345   83607 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gpjp2" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:13.976373   83607 pod_ready.go:81] duration metric: took 400.819164ms for pod "nvidia-device-plugin-daemonset-gpjp2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.976392   83607 pod_ready.go:38] duration metric: took 34.293217536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 22:58:13.976411   83607 api_server.go:52] waiting for apiserver process to appear ...
	I0315 22:58:13.976491   83607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 22:58:14.005538   83607 api_server.go:72] duration metric: took 41.803012887s to wait for apiserver process to appear ...
	I0315 22:58:14.005581   83607 api_server.go:88] waiting for apiserver healthz status ...
	I0315 22:58:14.005613   83607 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I0315 22:58:14.010094   83607 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I0315 22:58:14.011270   83607 api_server.go:141] control plane version: v1.28.4
	I0315 22:58:14.011293   83607 api_server.go:131] duration metric: took 5.70367ms to wait for apiserver health ...
	I0315 22:58:14.011303   83607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 22:58:14.181944   83607 system_pods.go:59] 18 kube-system pods found
	I0315 22:58:14.181982   83607 system_pods.go:61] "coredns-5dd5756b68-p6s6d" [7caaa4dc-1836-4020-b722-90edda2d212b] Running
	I0315 22:58:14.181990   83607 system_pods.go:61] "csi-hostpath-attacher-0" [ba76f6d6-961f-4d78-96ee-b5169360170f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 22:58:14.181996   83607 system_pods.go:61] "csi-hostpath-resizer-0" [a08ba8ff-3283-4356-a495-7ebfd59456b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 22:58:14.182004   83607 system_pods.go:61] "csi-hostpathplugin-5g6gq" [e6164251-098a-4dfd-9978-fdc4963327c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 22:58:14.182010   83607 system_pods.go:61] "etcd-addons-097314" [20de319c-896a-478d-943a-7a0c85f67b63] Running
	I0315 22:58:14.182014   83607 system_pods.go:61] "kube-apiserver-addons-097314" [6a75a924-4c87-47a7-9100-c16d67666cd1] Running
	I0315 22:58:14.182017   83607 system_pods.go:61] "kube-controller-manager-addons-097314" [a38fcc0b-c3ff-4356-9fc1-ea4518126611] Running
	I0315 22:58:14.182022   83607 system_pods.go:61] "kube-ingress-dns-minikube" [beb46bcd-db3c-4022-9b99-e6a29dbf5543] Running
	I0315 22:58:14.182028   83607 system_pods.go:61] "kube-proxy-zspm2" [11f770a3-08d0-4140-a786-578f0feee2bd] Running
	I0315 22:58:14.182032   83607 system_pods.go:61] "kube-scheduler-addons-097314" [bd1f348f-d044-462f-8e0d-2351e49ef9fb] Running
	I0315 22:58:14.182037   83607 system_pods.go:61] "metrics-server-69cf46c98-spvr4" [673c996b-9f13-4f55-a0da-458b3f9d201d] Running
	I0315 22:58:14.182042   83607 system_pods.go:61] "nvidia-device-plugin-daemonset-gpjp2" [2e033f82-a2e7-42b2-9052-980b0046daa3] Running
	I0315 22:58:14.182046   83607 system_pods.go:61] "registry-7bpx6" [f08323c1-5f57-4428-ab07-fa1dd1960c2c] Running
	I0315 22:58:14.182056   83607 system_pods.go:61] "registry-proxy-bp44p" [03d529e0-4bcd-4fa9-a95b-2921fe26e9cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 22:58:14.182067   83607 system_pods.go:61] "snapshot-controller-58dbcc7b99-gm4wh" [bee3baf8-f3b4-4a5d-8724-f2c5356c9d59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.182084   83607 system_pods.go:61] "snapshot-controller-58dbcc7b99-wvz4s" [0d796667-8f3f-4044-bb24-25cd1713ebc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.182088   83607 system_pods.go:61] "storage-provisioner" [9c400f5a-33f3-460d-a136-9d1ff87f0009] Running
	I0315 22:58:14.182092   83607 system_pods.go:61] "tiller-deploy-7b677967b9-5s4t7" [159edcb2-34c6-484f-b9c1-7b4d9f4cc492] Running
	I0315 22:58:14.182101   83607 system_pods.go:74] duration metric: took 170.791567ms to wait for pod list to return data ...
	I0315 22:58:14.182112   83607 default_sa.go:34] waiting for default service account to be created ...
	I0315 22:58:14.210391   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:14.334276   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:14.375271   83607 default_sa.go:45] found service account: "default"
	I0315 22:58:14.375297   83607 default_sa.go:55] duration metric: took 193.176835ms for default service account to be created ...
	I0315 22:58:14.375306   83607 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 22:58:14.439344   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:14.442508   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:14.777594   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:14.783410   83607 system_pods.go:86] 18 kube-system pods found
	I0315 22:58:14.783434   83607 system_pods.go:89] "coredns-5dd5756b68-p6s6d" [7caaa4dc-1836-4020-b722-90edda2d212b] Running
	I0315 22:58:14.783442   83607 system_pods.go:89] "csi-hostpath-attacher-0" [ba76f6d6-961f-4d78-96ee-b5169360170f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 22:58:14.783448   83607 system_pods.go:89] "csi-hostpath-resizer-0" [a08ba8ff-3283-4356-a495-7ebfd59456b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 22:58:14.783456   83607 system_pods.go:89] "csi-hostpathplugin-5g6gq" [e6164251-098a-4dfd-9978-fdc4963327c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 22:58:14.783461   83607 system_pods.go:89] "etcd-addons-097314" [20de319c-896a-478d-943a-7a0c85f67b63] Running
	I0315 22:58:14.783466   83607 system_pods.go:89] "kube-apiserver-addons-097314" [6a75a924-4c87-47a7-9100-c16d67666cd1] Running
	I0315 22:58:14.783470   83607 system_pods.go:89] "kube-controller-manager-addons-097314" [a38fcc0b-c3ff-4356-9fc1-ea4518126611] Running
	I0315 22:58:14.783473   83607 system_pods.go:89] "kube-ingress-dns-minikube" [beb46bcd-db3c-4022-9b99-e6a29dbf5543] Running
	I0315 22:58:14.783477   83607 system_pods.go:89] "kube-proxy-zspm2" [11f770a3-08d0-4140-a786-578f0feee2bd] Running
	I0315 22:58:14.783481   83607 system_pods.go:89] "kube-scheduler-addons-097314" [bd1f348f-d044-462f-8e0d-2351e49ef9fb] Running
	I0315 22:58:14.783485   83607 system_pods.go:89] "metrics-server-69cf46c98-spvr4" [673c996b-9f13-4f55-a0da-458b3f9d201d] Running
	I0315 22:58:14.783489   83607 system_pods.go:89] "nvidia-device-plugin-daemonset-gpjp2" [2e033f82-a2e7-42b2-9052-980b0046daa3] Running
	I0315 22:58:14.783492   83607 system_pods.go:89] "registry-7bpx6" [f08323c1-5f57-4428-ab07-fa1dd1960c2c] Running
	I0315 22:58:14.783497   83607 system_pods.go:89] "registry-proxy-bp44p" [03d529e0-4bcd-4fa9-a95b-2921fe26e9cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 22:58:14.783504   83607 system_pods.go:89] "snapshot-controller-58dbcc7b99-gm4wh" [bee3baf8-f3b4-4a5d-8724-f2c5356c9d59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.783510   83607 system_pods.go:89] "snapshot-controller-58dbcc7b99-wvz4s" [0d796667-8f3f-4044-bb24-25cd1713ebc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.783514   83607 system_pods.go:89] "storage-provisioner" [9c400f5a-33f3-460d-a136-9d1ff87f0009] Running
	I0315 22:58:14.783518   83607 system_pods.go:89] "tiller-deploy-7b677967b9-5s4t7" [159edcb2-34c6-484f-b9c1-7b4d9f4cc492] Running
	I0315 22:58:14.783525   83607 system_pods.go:126] duration metric: took 408.213527ms to wait for k8s-apps to be running ...
	I0315 22:58:14.783533   83607 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 22:58:14.783576   83607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 22:58:14.820486   83607 system_svc.go:56] duration metric: took 36.936432ms WaitForService to wait for kubelet
	I0315 22:58:14.820524   83607 kubeadm.go:576] duration metric: took 42.618005202s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 22:58:14.820554   83607 node_conditions.go:102] verifying NodePressure condition ...
	I0315 22:58:14.823730   83607 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 22:58:14.823757   83607 node_conditions.go:123] node cpu capacity is 2
	I0315 22:58:14.823770   83607 node_conditions.go:105] duration metric: took 3.209969ms to run NodePressure ...
	I0315 22:58:14.823780   83607 start.go:240] waiting for startup goroutines ...
	I0315 22:58:14.833729   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:14.939495   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:14.942498   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:15.210593   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:15.335294   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:15.441069   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:15.445348   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:15.710855   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:15.834496   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:15.940347   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:15.943835   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:16.210944   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:16.334244   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:16.439611   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:16.442700   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:16.710226   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:16.834432   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:16.939688   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:16.942372   83607 kapi.go:107] duration metric: took 35.506553508s to wait for kubernetes.io/minikube-addons=registry ...
	I0315 22:58:17.211777   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:17.334461   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:17.439768   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:17.711466   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:17.834089   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:17.940041   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:18.211347   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:18.334079   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:18.440635   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:18.711596   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:18.835047   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:18.940326   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:19.211203   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:19.334547   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:19.440196   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:19.710493   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:19.834307   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:19.941304   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:20.211025   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:20.334455   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:20.439513   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:20.745109   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:20.833547   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:20.940205   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:21.210662   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:21.336179   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:21.441506   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:21.710258   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:21.834669   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:21.940281   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:22.211004   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:22.336221   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:22.440703   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:22.710246   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:23.179889   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:23.180081   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:23.217461   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:23.334446   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:23.441341   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:23.709951   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:23.834880   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:23.941062   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:24.211030   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:24.334678   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:24.439502   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:24.711649   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:24.834927   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:24.939987   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:25.211261   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:25.334233   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:25.441798   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:25.710319   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:25.832945   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:25.940918   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:26.210563   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:26.334280   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:26.440043   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:26.710598   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:26.834048   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:26.941210   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:27.213520   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:27.337485   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:27.440770   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:27.711089   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:27.832811   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:27.940670   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:28.641447   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:28.642276   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:28.642465   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:28.712660   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:28.833345   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:28.940085   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:29.210768   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:29.333670   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:29.439784   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:29.710788   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:29.834192   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:29.940064   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:30.211731   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:30.334462   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:30.444604   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:30.711206   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:30.834411   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:30.942423   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:31.210363   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:31.334403   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:31.440499   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:31.713401   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:31.836146   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:31.940426   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:32.213509   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:32.341040   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:32.451987   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:32.710234   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:32.835528   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:32.940649   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:33.211893   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:33.333722   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:33.440357   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:33.713352   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:33.837831   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:33.939972   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:34.211653   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:34.333832   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:34.440191   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:34.710349   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:34.834052   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:35.356354   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:35.362516   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:35.367781   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:35.445163   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:35.716034   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:35.835001   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:35.940757   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:36.212798   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:36.334388   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:36.439736   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:36.710946   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:36.834659   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:36.944957   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:37.210153   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:37.333673   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:37.440446   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:37.711051   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:37.833946   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:37.940377   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:38.212418   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:38.334817   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:38.442713   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:38.711264   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:38.835141   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:38.943985   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:39.211276   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:39.334565   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:39.440233   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:39.716250   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:39.834029   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:39.940077   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:40.215520   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:40.342226   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:40.440739   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:40.716758   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:40.833980   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:40.941983   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:41.211214   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:41.336784   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:41.440592   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:41.710815   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:41.833809   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:41.945865   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:42.212258   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:42.336092   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:42.440013   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:42.711188   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:42.833776   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:42.940217   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:43.212329   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:43.833808   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:43.842294   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:43.849549   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:43.863811   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:43.940665   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:44.210598   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:44.334066   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:44.440226   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:44.709727   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:44.833911   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:44.941529   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:45.212096   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:45.334072   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:45.439636   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:45.711526   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:45.833997   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:45.941054   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:46.211035   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:46.334394   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:46.439882   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:46.721125   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:46.833922   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:46.940509   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:47.210829   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:47.334527   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:47.440801   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:47.710529   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:47.832966   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:47.940904   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:48.210496   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:48.333346   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:48.439560   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:48.711305   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:48.834963   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:48.940050   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:49.210703   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:49.336040   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:49.444316   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:50.063252   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:50.063316   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:50.069865   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:50.214339   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:50.334052   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:50.441040   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:50.709683   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:50.833692   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:50.939920   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:51.210100   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:51.333887   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:51.440435   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:51.710930   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:51.834578   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:51.939634   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:52.210952   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:52.335621   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:52.440153   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:52.710733   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:52.834137   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:53.112211   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:53.210149   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:53.334629   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:53.441743   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:53.711461   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:53.835593   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:53.943247   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:54.214353   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:54.334227   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:54.439415   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:54.711302   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:54.833885   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:54.940022   83607 kapi.go:107] duration metric: took 1m13.507147774s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0315 22:58:55.210033   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:55.336941   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:55.711193   83607 kapi.go:107] duration metric: took 1m12.006762799s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0315 22:58:55.834447   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:56.334245   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:56.834100   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:57.333371   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:57.930329   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:58.334307   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:58.834898   83607 kapi.go:107] duration metric: took 1m13.005144765s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0315 22:58:58.836671   83607 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-097314 cluster.
	I0315 22:58:58.838043   83607 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0315 22:58:58.839360   83607 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0315 22:58:58.840666   83607 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0315 22:58:58.841909   83607 addons.go:505] duration metric: took 1m26.639340711s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0315 22:58:58.841948   83607 start.go:245] waiting for cluster config update ...
	I0315 22:58:58.841976   83607 start.go:254] writing updated cluster config ...
	I0315 22:58:58.842243   83607 ssh_runner.go:195] Run: rm -f paused
	I0315 22:58:58.893207   83607 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0315 22:58:58.894958   83607 out.go:177] * Done! kubectl is now configured to use "addons-097314" cluster and "default" namespace by default
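
The gcp-auth notes above mention opting a pod out of credential mounting by adding a label with the `gcp-auth-skip-secret` key. A minimal sketch of doing that at pod-creation time follows; only the label key and the addons-097314 context name come from the log output, while the pod name, image, and the "true" label value are illustrative assumptions rather than anything exercised in this run.

	# Illustrative only: create a throwaway pod carrying the gcp-auth-skip-secret
	# label so the gcp-auth webhook leaves GCP credentials unmounted in it.
	kubectl --context addons-097314 run gcp-auth-skip-demo \
	  --image=busybox --restart=Never \
	  --labels="gcp-auth-skip-secret=true" -- sleep 3600

Because the mounting is decided when the pod is admitted, the label has to be present at creation (or the pod recreated), which is consistent with the "either recreate them or rerun addons enable with --refresh" guidance printed above.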
	
	
	==> CRI-O <==
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.920151942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710543734920121802,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44348728-7e01-4b01-8fc3-2043cc1f9d42 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.920812938Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e3a8aea7-dc65-40fa-9cf6-5827e1d85e08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.920965278Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e3a8aea7-dc65-40fa-9cf6-5827e1d85e08 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.921392216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fe5bd57848870422f906e9eca78cfbc55e4e12b25137ec2441e0e76b891e1d4,PodSandboxId:8643f6191a0646b560a78dfc4451715425550a666941cb0fcf0406abc74132f9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710543727719776381,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jdpj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e96233cd-9cca-400f-b651-d1d222622ec7,},Annotations:map[string]string{io.kubernetes.container.hash: bda62444,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60fc446ea13582fce68669d615b3adb955e402de16d2c4c4d4bddf9ba080a23,PodSandboxId:88542fce8b77b6ecf892a5e5136d295c98e576c836c06035f4ed51f1795f7956,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710543586548167204,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a22cc079-e09c-4d9a-b112-aadd108e8149,},Annotations:map[string]string{io.kubern
etes.container.hash: e3f163ce,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e238683be105df71a217c901668c0c9a81f8fe11d6f7faf28d148898c64c6,PodSandboxId:302bffa2f4e98cf9162261bdbd3f463d4db07b68eedf3e0ebdd9720f56a46429,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710543576711480902,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-6q49r,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1830ee4b-4662-4561-b106-65e211f79e01,},Annotations:map[string]string{io.kubernetes.container.hash: 3c104721,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d,PodSandboxId:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710543538005671829,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,},Annotations:map[string]string{io.kubernetes.container.hash: a64da723,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf304fd980426d954707f8a64d861b997836692164cd559753666f089ad6d9eb,PodSandboxId:570bc8c2c75ef357c570d661fe907391800187b9debbbaefcbed4e12e35765e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710543515552600137,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f89sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fb6ef82-c69f-4938-9750-703821777bae,},Annotations:map[string]string{io.kubernetes.container.hash: 34101aa2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffc2a54ba4a4570cea0ffee96ae3627173ef2355c9f17ed39fc4252752c479c,PodSandboxId:c018dddd54c691afb0b93342568fa0dbe18ce5c7c3d77e0e1059032240ad3496,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710543512373947994,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tdwdz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 486fc903-f174-4206-91d6-a9744dcbfb23,},Annotations:map[string]string{io.kubernetes.container.hash: 2954f0a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e324c32f75a06a80a991cb9e626e87d6fc5ebb8a8bde9cecfc8534894d2a100,PodSandboxId:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710543503304620706,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,},Annotations:map[string]string{io.kubernetes.container.hash: e0e945f1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85,PodSandboxId:00183462ad3c8237be611446ea62596a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa33
7bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710543478088923910,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,},Annotations:map[string]string{io.kubernetes.container.hash: 667274d8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78,PodSandboxId:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710543462474927371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{io.kubernetes.container.hash: 506fb218,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23,PodSandboxId:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710543453910561060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,},Annotations:map[string]string{io.kubernetes.container.hash: 98b00e74,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3,PodSandboxId:5ef770e5867f87e50a902a6e9baaa2f8b75ab65acbe55931f3fa31caedb55e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710543452759732400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4140-a786-578f0feee2bd,},Annotations:map[string]string{io.kubernetes.container.hash: f8e87be9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05e642c2e04d68f8ddb17d589ac545b7
d6e6455cc4cbb87ea05be405497d75c,PodSandboxId:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710543433858697290,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9790ed80546000baa27b620fb3443e56,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8e1034ff660949fd2f7a
2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a,PodSandboxId:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710543433816571257,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,},Annotations:map[string]string{io.kubernetes.container.hash: 2f963277,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e,PodSa
ndboxId:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710543433886749379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939,PodSandboxId:d02d2c9bf
cd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710543433785318568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,},Annotations:map[string]string{io.kubernetes.container.hash: 771cd21a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e3a8aea7-dc65-40fa-9cf6-5827e1d85e08 name=/runtime.v1.RuntimeService/
ListContainers
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.961083128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d23e566-6b98-4e39-bcfe-050f60895630 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.961545595Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d23e566-6b98-4e39-bcfe-050f60895630 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.963559733Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf2c9faf-682d-41ca-bd25-b6a9401a5877 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.965166714Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710543734965132454,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf2c9faf-682d-41ca-bd25-b6a9401a5877 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.965665906Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0acabcd9-2c14-49d6-b682-85228982cce2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.965743610Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0acabcd9-2c14-49d6-b682-85228982cce2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:14 addons-097314 crio[671]: time="2024-03-15 23:02:14.966183103Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fe5bd57848870422f906e9eca78cfbc55e4e12b25137ec2441e0e76b891e1d4,PodSandboxId:8643f6191a0646b560a78dfc4451715425550a666941cb0fcf0406abc74132f9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710543727719776381,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jdpj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e96233cd-9cca-400f-b651-d1d222622ec7,},Annotations:map[string]string{io.kubernetes.container.hash: bda62444,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60fc446ea13582fce68669d615b3adb955e402de16d2c4c4d4bddf9ba080a23,PodSandboxId:88542fce8b77b6ecf892a5e5136d295c98e576c836c06035f4ed51f1795f7956,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710543586548167204,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a22cc079-e09c-4d9a-b112-aadd108e8149,},Annotations:map[string]string{io.kubern
etes.container.hash: e3f163ce,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e238683be105df71a217c901668c0c9a81f8fe11d6f7faf28d148898c64c6,PodSandboxId:302bffa2f4e98cf9162261bdbd3f463d4db07b68eedf3e0ebdd9720f56a46429,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710543576711480902,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-6q49r,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1830ee4b-4662-4561-b106-65e211f79e01,},Annotations:map[string]string{io.kubernetes.container.hash: 3c104721,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d,PodSandboxId:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710543538005671829,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,},Annotations:map[string]string{io.kubernetes.container.hash: a64da723,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf304fd980426d954707f8a64d861b997836692164cd559753666f089ad6d9eb,PodSandboxId:570bc8c2c75ef357c570d661fe907391800187b9debbbaefcbed4e12e35765e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710543515552600137,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f89sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fb6ef82-c69f-4938-9750-703821777bae,},Annotations:map[string]string{io.kubernetes.container.hash: 34101aa2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffc2a54ba4a4570cea0ffee96ae3627173ef2355c9f17ed39fc4252752c479c,PodSandboxId:c018dddd54c691afb0b93342568fa0dbe18ce5c7c3d77e0e1059032240ad3496,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710543512373947994,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tdwdz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 486fc903-f174-4206-91d6-a9744dcbfb23,},Annotations:map[string]string{io.kubernetes.container.hash: 2954f0a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e324c32f75a06a80a991cb9e626e87d6fc5ebb8a8bde9cecfc8534894d2a100,PodSandboxId:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710543503304620706,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,},Annotations:map[string]string{io.kubernetes.container.hash: e0e945f1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85,PodSandboxId:00183462ad3c8237be611446ea62596a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa33
7bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710543478088923910,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,},Annotations:map[string]string{io.kubernetes.container.hash: 667274d8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78,PodSandboxId:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710543462474927371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{io.kubernetes.container.hash: 506fb218,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23,PodSandboxId:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710543453910561060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,},Annotations:map[string]string{io.kubernetes.container.hash: 98b00e74,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3,PodSandboxId:5ef770e5867f87e50a902a6e9baaa2f8b75ab65acbe55931f3fa31caedb55e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710543452759732400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4140-a786-578f0feee2bd,},Annotations:map[string]string{io.kubernetes.container.hash: f8e87be9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05e642c2e04d68f8ddb17d589ac545b7
d6e6455cc4cbb87ea05be405497d75c,PodSandboxId:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710543433858697290,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9790ed80546000baa27b620fb3443e56,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8e1034ff660949fd2f7a
2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a,PodSandboxId:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710543433816571257,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,},Annotations:map[string]string{io.kubernetes.container.hash: 2f963277,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e,PodSa
ndboxId:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710543433886749379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939,PodSandboxId:d02d2c9bf
cd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710543433785318568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,},Annotations:map[string]string{io.kubernetes.container.hash: 771cd21a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0acabcd9-2c14-49d6-b682-85228982cce2 name=/runtime.v1.RuntimeService/
ListContainers
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.002040785Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1928dc31-c590-436e-81df-6391ca71abda name=/runtime.v1.RuntimeService/Version
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.002132903Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1928dc31-c590-436e-81df-6391ca71abda name=/runtime.v1.RuntimeService/Version
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.003781674Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5588808e-b537-44ac-b0dc-1f19025ba826 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.005361341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710543735005331433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5588808e-b537-44ac-b0dc-1f19025ba826 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.005920010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b38afcbd-3407-46ba-90ba-b8dfe164f557 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.006046461Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b38afcbd-3407-46ba-90ba-b8dfe164f557 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.006469059Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fe5bd57848870422f906e9eca78cfbc55e4e12b25137ec2441e0e76b891e1d4,PodSandboxId:8643f6191a0646b560a78dfc4451715425550a666941cb0fcf0406abc74132f9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710543727719776381,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jdpj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e96233cd-9cca-400f-b651-d1d222622ec7,},Annotations:map[string]string{io.kubernetes.container.hash: bda62444,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60fc446ea13582fce68669d615b3adb955e402de16d2c4c4d4bddf9ba080a23,PodSandboxId:88542fce8b77b6ecf892a5e5136d295c98e576c836c06035f4ed51f1795f7956,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710543586548167204,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a22cc079-e09c-4d9a-b112-aadd108e8149,},Annotations:map[string]string{io.kubern
etes.container.hash: e3f163ce,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e238683be105df71a217c901668c0c9a81f8fe11d6f7faf28d148898c64c6,PodSandboxId:302bffa2f4e98cf9162261bdbd3f463d4db07b68eedf3e0ebdd9720f56a46429,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710543576711480902,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-6q49r,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1830ee4b-4662-4561-b106-65e211f79e01,},Annotations:map[string]string{io.kubernetes.container.hash: 3c104721,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d,PodSandboxId:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710543538005671829,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,},Annotations:map[string]string{io.kubernetes.container.hash: a64da723,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf304fd980426d954707f8a64d861b997836692164cd559753666f089ad6d9eb,PodSandboxId:570bc8c2c75ef357c570d661fe907391800187b9debbbaefcbed4e12e35765e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710543515552600137,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f89sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fb6ef82-c69f-4938-9750-703821777bae,},Annotations:map[string]string{io.kubernetes.container.hash: 34101aa2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffc2a54ba4a4570cea0ffee96ae3627173ef2355c9f17ed39fc4252752c479c,PodSandboxId:c018dddd54c691afb0b93342568fa0dbe18ce5c7c3d77e0e1059032240ad3496,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710543512373947994,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tdwdz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 486fc903-f174-4206-91d6-a9744dcbfb23,},Annotations:map[string]string{io.kubernetes.container.hash: 2954f0a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e324c32f75a06a80a991cb9e626e87d6fc5ebb8a8bde9cecfc8534894d2a100,PodSandboxId:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710543503304620706,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,},Annotations:map[string]string{io.kubernetes.container.hash: e0e945f1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85,PodSandboxId:00183462ad3c8237be611446ea62596a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa33
7bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710543478088923910,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,},Annotations:map[string]string{io.kubernetes.container.hash: 667274d8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78,PodSandboxId:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710543462474927371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{io.kubernetes.container.hash: 506fb218,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23,PodSandboxId:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710543453910561060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,},Annotations:map[string]string{io.kubernetes.container.hash: 98b00e74,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3,PodSandboxId:5ef770e5867f87e50a902a6e9baaa2f8b75ab65acbe55931f3fa31caedb55e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710543452759732400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4140-a786-578f0feee2bd,},Annotations:map[string]string{io.kubernetes.container.hash: f8e87be9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05e642c2e04d68f8ddb17d589ac545b7
d6e6455cc4cbb87ea05be405497d75c,PodSandboxId:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710543433858697290,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9790ed80546000baa27b620fb3443e56,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8e1034ff660949fd2f7a
2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a,PodSandboxId:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710543433816571257,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,},Annotations:map[string]string{io.kubernetes.container.hash: 2f963277,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e,PodSa
ndboxId:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710543433886749379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939,PodSandboxId:d02d2c9bf
cd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710543433785318568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,},Annotations:map[string]string{io.kubernetes.container.hash: 771cd21a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b38afcbd-3407-46ba-90ba-b8dfe164f557 name=/runtime.v1.RuntimeService/
ListContainers
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.048794437Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=54f03e60-f55b-4b50-93ae-1ec85efbfda4 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.048870215Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=54f03e60-f55b-4b50-93ae-1ec85efbfda4 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.050621041Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9755adb7-3788-4e44-9591-c8fab409babf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.051899356Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710543735051869235,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9755adb7-3788-4e44-9591-c8fab409babf name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.052775717Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd51979d-daf1-41e4-9452-062c78cb8e21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.052854194Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd51979d-daf1-41e4-9452-062c78cb8e21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:02:15 addons-097314 crio[671]: time="2024-03-15 23:02:15.053285403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7fe5bd57848870422f906e9eca78cfbc55e4e12b25137ec2441e0e76b891e1d4,PodSandboxId:8643f6191a0646b560a78dfc4451715425550a666941cb0fcf0406abc74132f9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710543727719776381,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-jdpj2,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: e96233cd-9cca-400f-b651-d1d222622ec7,},Annotations:map[string]string{io.kubernetes.container.hash: bda62444,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f60fc446ea13582fce68669d615b3adb955e402de16d2c4c4d4bddf9ba080a23,PodSandboxId:88542fce8b77b6ecf892a5e5136d295c98e576c836c06035f4ed51f1795f7956,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710543586548167204,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a22cc079-e09c-4d9a-b112-aadd108e8149,},Annotations:map[string]string{io.kubern
etes.container.hash: e3f163ce,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:257e238683be105df71a217c901668c0c9a81f8fe11d6f7faf28d148898c64c6,PodSandboxId:302bffa2f4e98cf9162261bdbd3f463d4db07b68eedf3e0ebdd9720f56a46429,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710543576711480902,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-6q49r,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: 1830ee4b-4662-4561-b106-65e211f79e01,},Annotations:map[string]string{io.kubernetes.container.hash: 3c104721,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d,PodSandboxId:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710543538005671829,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,},Annotations:map[string]string{io.kubernetes.container.hash: a64da723,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf304fd980426d954707f8a64d861b997836692164cd559753666f089ad6d9eb,PodSandboxId:570bc8c2c75ef357c570d661fe907391800187b9debbbaefcbed4e12e35765e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710543515552600137,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f89sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fb6ef82-c69f-4938-9750-703821777bae,},Annotations:map[string]string{io.kubernetes.container.hash: 34101aa2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffc2a54ba4a4570cea0ffee96ae3627173ef2355c9f17ed39fc4252752c479c,PodSandboxId:c018dddd54c691afb0b93342568fa0dbe18ce5c7c3d77e0e1059032240ad3496,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710543512373947994,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tdwdz,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 486fc903-f174-4206-91d6-a9744dcbfb23,},Annotations:map[string]string{io.kubernetes.container.hash: 2954f0a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e324c32f75a06a80a991cb9e626e87d6fc5ebb8a8bde9cecfc8534894d2a100,PodSandboxId:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710543503304620706,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,},Annotations:map[string]string{io.kubernetes.container.hash: e0e945f1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85,PodSandboxId:00183462ad3c8237be611446ea62596a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa33
7bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710543478088923910,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,},Annotations:map[string]string{io.kubernetes.container.hash: 667274d8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78,PodSandboxId:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&ContainerMetadata{Name:storage
-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710543462474927371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{io.kubernetes.container.hash: 506fb218,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23,PodSandboxId:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,}
,Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710543453910561060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,},Annotations:map[string]string{io.kubernetes.container.hash: 98b00e74,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:
bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3,PodSandboxId:5ef770e5867f87e50a902a6e9baaa2f8b75ab65acbe55931f3fa31caedb55e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710543452759732400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4140-a786-578f0feee2bd,},Annotations:map[string]string{io.kubernetes.container.hash: f8e87be9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05e642c2e04d68f8ddb17d589ac545b7
d6e6455cc4cbb87ea05be405497d75c,PodSandboxId:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710543433858697290,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9790ed80546000baa27b620fb3443e56,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8e1034ff660949fd2f7a
2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a,PodSandboxId:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710543433816571257,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,},Annotations:map[string]string{io.kubernetes.container.hash: 2f963277,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e,PodSa
ndboxId:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710543433886749379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939,PodSandboxId:d02d2c9bf
cd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710543433785318568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,},Annotations:map[string]string{io.kubernetes.container.hash: 771cd21a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd51979d-daf1-41e4-9452-062c78cb8e21 name=/runtime.v1.RuntimeService/
ListContainers
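	(The CRI-O debug entries above are the tail of the node's crio journal as collected by the test harness's log bundle. A rough way to pull a comparable dump by hand, assuming the addons-097314 profile from this run is still available, would be:

	  # full minikube log bundle for the profile (includes the crio journal tail)
	  out/minikube-linux-amd64 -p addons-097314 logs -n 25

	  # or read the crio journal directly on the node
	  out/minikube-linux-amd64 -p addons-097314 ssh "sudo journalctl -u crio --no-pager -n 200"

	Both commands are standard minikube/journalctl usage, not output reproduced from this run.)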
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7fe5bd5784887       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      7 seconds ago       Running             hello-world-app           0                   8643f6191a064       hello-world-app-5d77478584-jdpj2
	f60fc446ea135       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   88542fce8b77b       nginx
	257e238683be1       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   302bffa2f4e98       headlamp-5485c556b-6q49r
	24e773cc47dab       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   83ab6e3801aab       gcp-auth-7d69788767-l2z4d
	bf304fd980426       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   570bc8c2c75ef       ingress-nginx-admission-patch-f89sv
	4ffc2a54ba4a4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   c018dddd54c69       ingress-nginx-admission-create-tdwdz
	1e324c32f75a0       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   284fbccc261c7       yakd-dashboard-9947fc6bf-rb286
	c6f5f4c882217       registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca        4 minutes ago       Running             metrics-server            0                   00183462ad3c8       metrics-server-69cf46c98-spvr4
	29ada771117de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   f2e2e22b8b41d       storage-provisioner
	ecce4d992fb31       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   f057691998e9a       coredns-5dd5756b68-p6s6d
	bb754aa5ded80       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   5ef770e5867f8       kube-proxy-zspm2
	659078f5add23       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             5 minutes ago       Running             kube-scheduler            0                   4e5a96c86a833       kube-scheduler-addons-097314
	c05e642c2e04d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             5 minutes ago       Running             kube-controller-manager   0                   3f955150cb63f       kube-controller-manager-addons-097314
	3ed8e1034ff66       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             5 minutes ago       Running             etcd                      0                   18e7af9df8f8d       etcd-addons-097314
	4403cafbe1aff       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             5 minutes ago       Running             kube-apiserver            0                   d02d2c9bfcd71       kube-apiserver-addons-097314
	
	
	==> coredns [ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43283 - 17841 "HINFO IN 6454626148988223792.8343432799215500319. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021021614s
	[INFO] 10.244.0.22:38076 - 49108 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000504131s
	[INFO] 10.244.0.22:50309 - 56871 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177763s
	[INFO] 10.244.0.22:47752 - 29698 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012988s
	[INFO] 10.244.0.22:44958 - 57465 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102932s
	[INFO] 10.244.0.22:58986 - 25833 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110912s
	[INFO] 10.244.0.22:34448 - 55463 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000186864s
	[INFO] 10.244.0.22:60633 - 3662 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003807109s
	[INFO] 10.244.0.22:53624 - 4084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.004127163s
	[INFO] 10.244.0.25:37505 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000464611s
	[INFO] 10.244.0.25:46065 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130821s
	
	
	==> describe nodes <==
	Name:               addons-097314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-097314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=addons-097314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T22_57_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-097314
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 22:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-097314
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:02:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 22:59:53 +0000   Fri, 15 Mar 2024 22:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 22:59:53 +0000   Fri, 15 Mar 2024 22:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 22:59:53 +0000   Fri, 15 Mar 2024 22:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 22:59:53 +0000   Fri, 15 Mar 2024 22:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    addons-097314
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 b25183c5e3414df78f950fb09dc6c38c
	  System UUID:                b25183c5-e341-4df7-8f95-0fb09dc6c38c
	  Boot ID:                    4fb98055-0700-4a85-9eae-f3b0ed873bc7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-jdpj2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  gcp-auth                    gcp-auth-7d69788767-l2z4d                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m30s
	  headlamp                    headlamp-5485c556b-6q49r                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m46s
	  kube-system                 coredns-5dd5756b68-p6s6d                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m43s
	  kube-system                 etcd-addons-097314                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m56s
	  kube-system                 kube-apiserver-addons-097314             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-controller-manager-addons-097314    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-proxy-zspm2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m44s
	  kube-system                 kube-scheduler-addons-097314             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 metrics-server-69cf46c98-spvr4           100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m37s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m36s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-rb286           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             498Mi (13%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m39s  kube-proxy       
	  Normal  Starting                 4m56s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m56s  kubelet          Node addons-097314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m56s  kubelet          Node addons-097314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m56s  kubelet          Node addons-097314 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m56s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m56s  kubelet          Node addons-097314 status is now: NodeReady
	  Normal  RegisteredNode           4m44s  node-controller  Node addons-097314 event: Registered Node addons-097314 in Controller
	
	
	==> dmesg <==
	[  +0.084713] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.344667] systemd-fstab-generator[1485]: Ignoring "noauto" option for root device
	[  +0.046296] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003648] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.068077] kauditd_printk_skb: 105 callbacks suppressed
	[  +8.690714] kauditd_printk_skb: 98 callbacks suppressed
	[Mar15 22:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.621383] kauditd_printk_skb: 1 callbacks suppressed
	[ +24.457815] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.206184] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.274100] kauditd_printk_skb: 66 callbacks suppressed
	[  +6.867087] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.770499] kauditd_printk_skb: 11 callbacks suppressed
	[Mar15 22:59] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.145272] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.683918] kauditd_printk_skb: 47 callbacks suppressed
	[  +5.503019] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.051420] kauditd_printk_skb: 29 callbacks suppressed
	[  +7.243889] kauditd_printk_skb: 54 callbacks suppressed
	[  +7.816155] kauditd_printk_skb: 15 callbacks suppressed
	[  +9.624992] kauditd_printk_skb: 8 callbacks suppressed
	[Mar15 23:00] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.428101] kauditd_printk_skb: 25 callbacks suppressed
	[Mar15 23:02] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.056302] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [3ed8e1034ff660949fd2f7a2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a] <==
	{"level":"info","ts":"2024-03-15T22:59:18.91112Z","caller":"traceutil/trace.go:171","msg":"trace[504965360] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1344; }","duration":"420.885727ms","start":"2024-03-15T22:59:18.490222Z","end":"2024-03-15T22:59:18.911107Z","steps":["trace[504965360] 'agreement among raft nodes before linearized reading'  (duration: 419.288579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:18.911202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:18.490199Z","time spent":"420.983995ms","remote":"127.0.0.1:34488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	{"level":"warn","ts":"2024-03-15T22:59:35.690069Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.504466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3747"}
	{"level":"info","ts":"2024-03-15T22:59:35.690237Z","caller":"traceutil/trace.go:171","msg":"trace[1067257451] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1472; }","duration":"322.761996ms","start":"2024-03-15T22:59:35.367456Z","end":"2024-03-15T22:59:35.690218Z","steps":["trace[1067257451] 'range keys from in-memory index tree'  (duration: 322.402035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:35.690279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:35.367443Z","time spent":"322.825331ms","remote":"127.0.0.1:34446","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3769,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"warn","ts":"2024-03-15T22:59:35.690189Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.138286ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:554"}
	{"level":"info","ts":"2024-03-15T22:59:35.690379Z","caller":"traceutil/trace.go:171","msg":"trace[568159544] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1472; }","duration":"112.335108ms","start":"2024-03-15T22:59:35.578034Z","end":"2024-03-15T22:59:35.690369Z","steps":["trace[568159544] 'range keys from in-memory index tree'  (duration: 112.060437ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:35.690499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"434.851441ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2965"}
	{"level":"info","ts":"2024-03-15T22:59:35.690562Z","caller":"traceutil/trace.go:171","msg":"trace[1351427339] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1472; }","duration":"434.916618ms","start":"2024-03-15T22:59:35.255635Z","end":"2024-03-15T22:59:35.690552Z","steps":["trace[1351427339] 'range keys from in-memory index tree'  (duration: 434.770658ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:35.690597Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:35.255619Z","time spent":"434.970021ms","remote":"127.0.0.1:34446","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":2987,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"info","ts":"2024-03-15T22:59:40.7647Z","caller":"traceutil/trace.go:171","msg":"trace[1041875824] transaction","detail":"{read_only:false; response_revision:1519; number_of_response:1; }","duration":"405.019648ms","start":"2024-03-15T22:59:40.359653Z","end":"2024-03-15T22:59:40.764673Z","steps":["trace[1041875824] 'process raft request'  (duration: 404.92306ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:40.764822Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:40.359638Z","time spent":"405.108435ms","remote":"127.0.0.1:34434","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1515 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-15T22:59:40.76523Z","caller":"traceutil/trace.go:171","msg":"trace[678564000] linearizableReadLoop","detail":"{readStateIndex:1575; appliedIndex:1575; }","duration":"397.990708ms","start":"2024-03-15T22:59:40.367229Z","end":"2024-03-15T22:59:40.76522Z","steps":["trace[678564000] 'read index received'  (duration: 397.988111ms)","trace[678564000] 'applied index is now lower than readState.Index'  (duration: 2.056µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T22:59:40.765321Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.515765ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/ingress-nginx/ingress-nginx-leader\" ","response":"range_response_count:1 size:500"}
	{"level":"info","ts":"2024-03-15T22:59:40.765339Z","caller":"traceutil/trace.go:171","msg":"trace[1829875495] range","detail":"{range_begin:/registry/leases/ingress-nginx/ingress-nginx-leader; range_end:; response_count:1; response_revision:1519; }","duration":"238.542224ms","start":"2024-03-15T22:59:40.526792Z","end":"2024-03-15T22:59:40.765334Z","steps":["trace[1829875495] 'agreement among raft nodes before linearized reading'  (duration: 238.487546ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:40.765467Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"398.253982ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" ","response":"range_response_count:1 size:3804"}
	{"level":"info","ts":"2024-03-15T22:59:40.76548Z","caller":"traceutil/trace.go:171","msg":"trace[2007559369] range","detail":"{range_begin:/registry/pods/headlamp/; range_end:/registry/pods/headlamp0; response_count:1; response_revision:1519; }","duration":"398.269558ms","start":"2024-03-15T22:59:40.367207Z","end":"2024-03-15T22:59:40.765476Z","steps":["trace[2007559369] 'agreement among raft nodes before linearized reading'  (duration: 398.236823ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:40.765496Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:40.367194Z","time spent":"398.29753ms","remote":"127.0.0.1:34446","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":3826,"request content":"key:\"/registry/pods/headlamp/\" range_end:\"/registry/pods/headlamp0\" "}
	{"level":"warn","ts":"2024-03-15T22:59:40.765638Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.557818ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/ingress-nginx/ingress-nginx-controller-76dc478dd8-ql9pj.17bd12485d46301b\" ","response":"range_response_count:1 size:797"}
	{"level":"info","ts":"2024-03-15T22:59:40.765656Z","caller":"traceutil/trace.go:171","msg":"trace[96196417] range","detail":"{range_begin:/registry/events/ingress-nginx/ingress-nginx-controller-76dc478dd8-ql9pj.17bd12485d46301b; range_end:; response_count:1; response_revision:1519; }","duration":"178.573041ms","start":"2024-03-15T22:59:40.587075Z","end":"2024-03-15T22:59:40.765648Z","steps":["trace[96196417] 'agreement among raft nodes before linearized reading'  (duration: 178.542469ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T22:59:42.928455Z","caller":"traceutil/trace.go:171","msg":"trace[409756354] transaction","detail":"{read_only:false; response_revision:1523; number_of_response:1; }","duration":"412.115355ms","start":"2024-03-15T22:59:42.516322Z","end":"2024-03-15T22:59:42.928437Z","steps":["trace[409756354] 'process raft request'  (duration: 411.847653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:42.928584Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:42.516305Z","time spent":"412.212707ms","remote":"127.0.0.1:34518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":541,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/addons-097314\" mod_revision:1452 > success:<request_put:<key:\"/registry/leases/kube-node-lease/addons-097314\" value_size:487 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/addons-097314\" > >"}
	{"level":"info","ts":"2024-03-15T22:59:42.928882Z","caller":"traceutil/trace.go:171","msg":"trace[1706848635] linearizableReadLoop","detail":"{readStateIndex:1579; appliedIndex:1579; }","duration":"144.215939ms","start":"2024-03-15T22:59:42.784653Z","end":"2024-03-15T22:59:42.928869Z","steps":["trace[1706848635] 'read index received'  (duration: 144.209994ms)","trace[1706848635] 'applied index is now lower than readState.Index'  (duration: 4.941µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T22:59:42.929568Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.942996ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-15T22:59:42.929635Z","caller":"traceutil/trace.go:171","msg":"trace[692392096] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1523; }","duration":"145.015535ms","start":"2024-03-15T22:59:42.78461Z","end":"2024-03-15T22:59:42.929626Z","steps":["trace[692392096] 'agreement among raft nodes before linearized reading'  (duration: 144.357637ms)"],"step_count":1}
	
	
	==> gcp-auth [24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d] <==
	2024/03/15 22:58:59 Ready to write response ...
	2024/03/15 22:58:59 Ready to marshal response ...
	2024/03/15 22:58:59 Ready to write response ...
	2024/03/15 22:59:09 Ready to marshal response ...
	2024/03/15 22:59:09 Ready to write response ...
	2024/03/15 22:59:10 Ready to marshal response ...
	2024/03/15 22:59:10 Ready to write response ...
	2024/03/15 22:59:17 Ready to marshal response ...
	2024/03/15 22:59:17 Ready to write response ...
	2024/03/15 22:59:23 Ready to marshal response ...
	2024/03/15 22:59:23 Ready to write response ...
	2024/03/15 22:59:29 Ready to marshal response ...
	2024/03/15 22:59:29 Ready to write response ...
	2024/03/15 22:59:29 Ready to marshal response ...
	2024/03/15 22:59:29 Ready to write response ...
	2024/03/15 22:59:29 Ready to marshal response ...
	2024/03/15 22:59:29 Ready to write response ...
	2024/03/15 22:59:30 Ready to marshal response ...
	2024/03/15 22:59:30 Ready to write response ...
	2024/03/15 22:59:38 Ready to marshal response ...
	2024/03/15 22:59:38 Ready to write response ...
	2024/03/15 23:00:05 Ready to marshal response ...
	2024/03/15 23:00:05 Ready to write response ...
	2024/03/15 23:02:04 Ready to marshal response ...
	2024/03/15 23:02:04 Ready to write response ...
	
	
	==> kernel <==
	 23:02:15 up 5 min,  0 users,  load average: 1.18, 1.28, 0.66
	Linux addons-097314 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939] <==
	I0315 22:59:38.242911       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.102.48.191"}
	I0315 22:59:51.320618       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0315 23:00:16.087326       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.299670       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.299739       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.311776       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.312861       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.324321       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.324353       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.337504       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.337569       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.348565       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.348637       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.362285       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.362399       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.367932       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.368651       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0315 23:00:24.384204       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0315 23:00:24.384291       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0315 23:00:25.338950       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0315 23:00:25.384738       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0315 23:00:25.400710       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0315 23:01:16.084662       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 23:02:05.167358       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.105.221.205"}
	E0315 23:02:06.237920       1 watch.go:287] unable to encode watch object *v1.WatchEvent: http2: stream closed (&streaming.encoderWithAllocator{writer:responsewriter.outerWithCloseNotifyAndFlush{UserProvidedDecorator:(*metrics.ResponseWriterDelegator)(0xc005a55020), InnerCloseNotifierFlusher:struct { httpsnoop.Unwrapper; http.ResponseWriter; http.Flusher; http.CloseNotifier; http.Pusher }{Unwrapper:(*httpsnoop.rw)(0xc010ac5400), ResponseWriter:(*httpsnoop.rw)(0xc010ac5400), Flusher:(*httpsnoop.rw)(0xc010ac5400), CloseNotifier:(*httpsnoop.rw)(0xc010ac5400), Pusher:(*httpsnoop.rw)(0xc010ac5400)}}, encoder:(*versioning.codec)(0xc0033d3cc0), memAllocator:(*runtime.Allocator)(0xc00413ade0)})
	
	
	==> kube-controller-manager [c05e642c2e04d68f8ddb17d589ac545b7d6e6455cc4cbb87ea05be405497d75c] <==
	E0315 23:00:59.852252       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 23:01:04.623539       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:01:04.623590       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 23:01:04.871608       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:01:04.871635       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 23:01:25.739960       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:01:25.740188       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 23:01:40.651956       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:01:40.652060       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 23:01:51.960451       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:01:51.960596       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0315 23:01:54.632703       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:01:54.632828       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0315 23:02:04.965692       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0315 23:02:05.003719       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-jdpj2"
	I0315 23:02:05.022469       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.778089ms"
	I0315 23:02:05.064636       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.800442ms"
	I0315 23:02:05.064813       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.093µs"
	I0315 23:02:07.119031       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0315 23:02:07.133109       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="4.61µs"
	I0315 23:02:07.143664       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0315 23:02:08.239778       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="7.721428ms"
	I0315 23:02:08.240405       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="124.828µs"
	W0315 23:02:08.948783       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0315 23:02:08.948954       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3] <==
	I0315 22:57:33.982162       1 server_others.go:69] "Using iptables proxy"
	I0315 22:57:34.262608       1 node.go:141] Successfully retrieved node IP: 192.168.39.35
	I0315 22:57:35.766044       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 22:57:35.766089       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 22:57:35.769569       1 server_others.go:152] "Using iptables Proxier"
	I0315 22:57:35.769631       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 22:57:35.769789       1 server.go:846] "Version info" version="v1.28.4"
	I0315 22:57:35.769822       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 22:57:35.771078       1 config.go:188] "Starting service config controller"
	I0315 22:57:35.771127       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 22:57:35.771161       1 config.go:97] "Starting endpoint slice config controller"
	I0315 22:57:35.771165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 22:57:35.771523       1 config.go:315] "Starting node config controller"
	I0315 22:57:35.771529       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 22:57:35.872158       1 shared_informer.go:318] Caches are synced for node config
	I0315 22:57:35.872207       1 shared_informer.go:318] Caches are synced for service config
	I0315 22:57:35.872235       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e] <==
	W0315 22:57:16.237627       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 22:57:16.237658       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 22:57:17.038272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 22:57:17.038301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 22:57:17.130127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 22:57:17.130261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 22:57:17.142694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 22:57:17.142766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 22:57:17.154634       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 22:57:17.154808       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 22:57:17.183940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 22:57:17.184048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 22:57:17.215692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 22:57:17.215795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 22:57:17.220792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 22:57:17.220903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 22:57:17.240606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 22:57:17.240698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 22:57:17.306266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 22:57:17.306310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 22:57:17.444575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 22:57:17.444638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 22:57:17.452059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 22:57:17.452101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0315 22:57:19.714419       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 23:02:05 addons-097314 kubelet[1261]: I0315 23:02:05.026434    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="a08ba8ff-3283-4356-a495-7ebfd59456b6" containerName="csi-resizer"
	Mar 15 23:02:05 addons-097314 kubelet[1261]: I0315 23:02:05.026469    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="0d796667-8f3f-4044-bb24-25cd1713ebc2" containerName="volume-snapshot-controller"
	Mar 15 23:02:05 addons-097314 kubelet[1261]: I0315 23:02:05.151606    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e96233cd-9cca-400f-b651-d1d222622ec7-gcp-creds\") pod \"hello-world-app-5d77478584-jdpj2\" (UID: \"e96233cd-9cca-400f-b651-d1d222622ec7\") " pod="default/hello-world-app-5d77478584-jdpj2"
	Mar 15 23:02:05 addons-097314 kubelet[1261]: I0315 23:02:05.152070    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tq4w\" (UniqueName: \"kubernetes.io/projected/e96233cd-9cca-400f-b651-d1d222622ec7-kube-api-access-9tq4w\") pod \"hello-world-app-5d77478584-jdpj2\" (UID: \"e96233cd-9cca-400f-b651-d1d222622ec7\") " pod="default/hello-world-app-5d77478584-jdpj2"
	Mar 15 23:02:06 addons-097314 kubelet[1261]: I0315 23:02:06.161141    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtq2\" (UniqueName: \"kubernetes.io/projected/beb46bcd-db3c-4022-9b99-e6a29dbf5543-kube-api-access-nqtq2\") pod \"beb46bcd-db3c-4022-9b99-e6a29dbf5543\" (UID: \"beb46bcd-db3c-4022-9b99-e6a29dbf5543\") "
	Mar 15 23:02:06 addons-097314 kubelet[1261]: I0315 23:02:06.163437    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beb46bcd-db3c-4022-9b99-e6a29dbf5543-kube-api-access-nqtq2" (OuterVolumeSpecName: "kube-api-access-nqtq2") pod "beb46bcd-db3c-4022-9b99-e6a29dbf5543" (UID: "beb46bcd-db3c-4022-9b99-e6a29dbf5543"). InnerVolumeSpecName "kube-api-access-nqtq2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 15 23:02:06 addons-097314 kubelet[1261]: I0315 23:02:06.198304    1261 scope.go:117] "RemoveContainer" containerID="b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91"
	Mar 15 23:02:06 addons-097314 kubelet[1261]: I0315 23:02:06.239475    1261 scope.go:117] "RemoveContainer" containerID="b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91"
	Mar 15 23:02:06 addons-097314 kubelet[1261]: E0315 23:02:06.241728    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91\": container with ID starting with b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91 not found: ID does not exist" containerID="b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91"
	Mar 15 23:02:06 addons-097314 kubelet[1261]: I0315 23:02:06.241799    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91"} err="failed to get container status \"b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91\": rpc error: code = NotFound desc = could not find container \"b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91\": container with ID starting with b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91 not found: ID does not exist"
	Mar 15 23:02:06 addons-097314 kubelet[1261]: I0315 23:02:06.262455    1261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nqtq2\" (UniqueName: \"kubernetes.io/projected/beb46bcd-db3c-4022-9b99-e6a29dbf5543-kube-api-access-nqtq2\") on node \"addons-097314\" DevicePath \"\""
	Mar 15 23:02:07 addons-097314 kubelet[1261]: I0315 23:02:07.251685    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="486fc903-f174-4206-91d6-a9744dcbfb23" path="/var/lib/kubelet/pods/486fc903-f174-4206-91d6-a9744dcbfb23/volumes"
	Mar 15 23:02:07 addons-097314 kubelet[1261]: I0315 23:02:07.252684    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8fb6ef82-c69f-4938-9750-703821777bae" path="/var/lib/kubelet/pods/8fb6ef82-c69f-4938-9750-703821777bae/volumes"
	Mar 15 23:02:07 addons-097314 kubelet[1261]: I0315 23:02:07.253215    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="beb46bcd-db3c-4022-9b99-e6a29dbf5543" path="/var/lib/kubelet/pods/beb46bcd-db3c-4022-9b99-e6a29dbf5543/volumes"
	Mar 15 23:02:10 addons-097314 kubelet[1261]: I0315 23:02:10.498579    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6cz9\" (UniqueName: \"kubernetes.io/projected/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5-kube-api-access-g6cz9\") pod \"361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5\" (UID: \"361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5\") "
	Mar 15 23:02:10 addons-097314 kubelet[1261]: I0315 23:02:10.498670    1261 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5-webhook-cert\") pod \"361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5\" (UID: \"361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5\") "
	Mar 15 23:02:10 addons-097314 kubelet[1261]: I0315 23:02:10.502236    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5" (UID: "361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 15 23:02:10 addons-097314 kubelet[1261]: I0315 23:02:10.504215    1261 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5-kube-api-access-g6cz9" (OuterVolumeSpecName: "kube-api-access-g6cz9") pod "361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5" (UID: "361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5"). InnerVolumeSpecName "kube-api-access-g6cz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 15 23:02:10 addons-097314 kubelet[1261]: I0315 23:02:10.599049    1261 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-g6cz9\" (UniqueName: \"kubernetes.io/projected/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5-kube-api-access-g6cz9\") on node \"addons-097314\" DevicePath \"\""
	Mar 15 23:02:10 addons-097314 kubelet[1261]: I0315 23:02:10.599099    1261 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5-webhook-cert\") on node \"addons-097314\" DevicePath \"\""
	Mar 15 23:02:11 addons-097314 kubelet[1261]: I0315 23:02:11.236761    1261 scope.go:117] "RemoveContainer" containerID="19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5"
	Mar 15 23:02:11 addons-097314 kubelet[1261]: I0315 23:02:11.278654    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5" path="/var/lib/kubelet/pods/361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5/volumes"
	Mar 15 23:02:11 addons-097314 kubelet[1261]: I0315 23:02:11.279469    1261 scope.go:117] "RemoveContainer" containerID="19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5"
	Mar 15 23:02:11 addons-097314 kubelet[1261]: E0315 23:02:11.280431    1261 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5\": container with ID starting with 19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5 not found: ID does not exist" containerID="19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5"
	Mar 15 23:02:11 addons-097314 kubelet[1261]: I0315 23:02:11.280474    1261 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5"} err="failed to get container status \"19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5\": rpc error: code = NotFound desc = could not find container \"19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5\": container with ID starting with 19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5 not found: ID does not exist"
	
	
	==> storage-provisioner [29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78] <==
	I0315 22:57:43.890334       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 22:57:43.958136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 22:57:43.958229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 22:57:43.975689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 22:57:43.976208       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-097314_fbbaef0b-1223-402a-a6cc-cd3bdb8ba88c!
	I0315 22:57:43.981273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7c377ec-d65e-487d-baf9-e123acc43c72", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-097314_fbbaef0b-1223-402a-a6cc-cd3bdb8ba88c became leader
	I0315 22:57:44.077434       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-097314_fbbaef0b-1223-402a-a6cc-cd3bdb8ba88c!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097314 -n addons-097314
helpers_test.go:261: (dbg) Run:  kubectl --context addons-097314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (158.40s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (8.5s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 4.101499ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-spvr4" [673c996b-9f13-4f55-a0da-458b3f9d201d] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.139668659s
addons_test.go:415: (dbg) Run:  kubectl --context addons-097314 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-097314 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (464.433526ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 22:59:19.039924   84920 out.go:291] Setting OutFile to fd 1 ...
	I0315 22:59:19.040044   84920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:59:19.040054   84920 out.go:304] Setting ErrFile to fd 2...
	I0315 22:59:19.040059   84920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:59:19.040250   84920 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 22:59:19.040524   84920 mustload.go:65] Loading cluster: addons-097314
	I0315 22:59:19.040856   84920 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:59:19.040880   84920 addons.go:597] checking whether the cluster is paused
	I0315 22:59:19.040963   84920 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:59:19.040975   84920 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:59:19.041332   84920 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:59:19.041372   84920 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:59:19.056083   84920 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40559
	I0315 22:59:19.056639   84920 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:59:19.057207   84920 main.go:141] libmachine: Using API Version  1
	I0315 22:59:19.057230   84920 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:59:19.057657   84920 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:59:19.057907   84920 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:59:19.059664   84920 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:59:19.059895   84920 ssh_runner.go:195] Run: systemctl --version
	I0315 22:59:19.059917   84920 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:59:19.062480   84920 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:59:19.062875   84920 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:59:19.062911   84920 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:59:19.063026   84920 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:59:19.063230   84920 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:59:19.063451   84920 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:59:19.063643   84920 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:59:19.149510   84920 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 22:59:19.149576   84920 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 22:59:19.257545   84920 cri.go:89] found id: "4786fa6f76feb0e4695c3767f2d295550c887027806d4e640b1ed2c33852dcf6"
	I0315 22:59:19.257574   84920 cri.go:89] found id: "06f8bb05f3b3a57fd0fa5e2d17e260b81906a2efe2e7cf17e6301cfed1328a23"
	I0315 22:59:19.257589   84920 cri.go:89] found id: "cf3826f24a4323581b8ad3132e05e89901b91df5312be3314c170b31bc98edf4"
	I0315 22:59:19.257595   84920 cri.go:89] found id: "11c825863e6ffd1c62a1bca156e971d19e235efde104336ba3d9b05b8b479bb5"
	I0315 22:59:19.257600   84920 cri.go:89] found id: "655c51734c4819649525feff7a6fa21900cee788c2bb050fc5750f78302d8675"
	I0315 22:59:19.257604   84920 cri.go:89] found id: "c1175da4d0dd405e88935f02d8f4debe8afbb09b7b074c429a0bfe0c76e26ffd"
	I0315 22:59:19.257612   84920 cri.go:89] found id: "1da0b76fe97da4482bb4d7523b797b96c494d64cf64033a99db52310b744fc51"
	I0315 22:59:19.257616   84920 cri.go:89] found id: "e2ccb85347428904b0d886921c99948947d95eda789a6d68969002906fadff6f"
	I0315 22:59:19.257620   84920 cri.go:89] found id: "46a38da70685745709b0a8add051c14a539203fec616b87637676631ddb9d141"
	I0315 22:59:19.257630   84920 cri.go:89] found id: "2d33b8f4de434d9d48ca9c230669338c8e02e74d17e90c51e7a5cfb18e57876f"
	I0315 22:59:19.257635   84920 cri.go:89] found id: "b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91"
	I0315 22:59:19.257639   84920 cri.go:89] found id: "1154b624dfb6077d08bbb09a92bf82a501f0afeafbbbda30d4361796faf7594a"
	I0315 22:59:19.257647   84920 cri.go:89] found id: "c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85"
	I0315 22:59:19.257651   84920 cri.go:89] found id: "29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78"
	I0315 22:59:19.257658   84920 cri.go:89] found id: "ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23"
	I0315 22:59:19.257667   84920 cri.go:89] found id: "bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3"
	I0315 22:59:19.257672   84920 cri.go:89] found id: "659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e"
	I0315 22:59:19.257678   84920 cri.go:89] found id: "c05e642c2e04d68f8ddb17d589ac545b7d6e6455cc4cbb87ea05be405497d75c"
	I0315 22:59:19.257683   84920 cri.go:89] found id: "3ed8e1034ff660949fd2f7a2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a"
	I0315 22:59:19.257688   84920 cri.go:89] found id: "4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939"
	I0315 22:59:19.257692   84920 cri.go:89] found id: ""
	I0315 22:59:19.257747   84920 ssh_runner.go:195] Run: sudo runc list -f json
	I0315 22:59:19.440391   84920 main.go:141] libmachine: Making call to close driver server
	I0315 22:59:19.440421   84920 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:59:19.440732   84920 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:59:19.440752   84920 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:59:19.443231   84920 out.go:177] 
	W0315 22:59:19.444631   84920 out.go:239] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-15T22:59:19Z" level=error msg="stat /run/runc/b72590f9af184ebf01e1b175e26d782723b8121d7ff05531b672efedcea9509a: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2024-03-15T22:59:19Z" level=error msg="stat /run/runc/b72590f9af184ebf01e1b175e26d782723b8121d7ff05531b672efedcea9509a: no such file or directory"
	
	W0315 22:59:19.444645   84920 out.go:239] * 
	* 
	W0315 22:59:19.447305   84920 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 22:59:19.448686   84920 out.go:177] 

                                                
                                                
** /stderr **
addons_test.go:434: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-097314 addons disable metrics-server --alsologtostderr -v=1": exit status 11
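
The exit status 11 above is minikube's MK_ADDON_DISABLE_PAUSED path: before disabling an addon it checks whether the cluster is paused by listing kube-system containers with crictl (the cri.go lines in the stderr above) and then asking runc for their state, and here "sudo runc list -f json" exited 1 because a container state directory under /run/runc was already gone. A minimal way to rerun that check by hand, reusing the exact commands from the stderr above (the ssh-with-command form is an assumption, but it matches the other minikube ssh invocations in this report):

	# list kube-system containers the way the paused check does (command taken from the cri.go log line above)
	out/minikube-linux-amd64 -p addons-097314 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# ask runc for container state; this is the step that failed with "no such file or directory"
	out/minikube-linux-amd64 -p addons-097314 ssh "sudo runc list -f json"

If runc no longer knows about a container that crictl still lists, the check errors out and the addon disable aborts even though the cluster is not actually paused, which is consistent with the stale /run/runc/... entry reported here.
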
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-097314 -n addons-097314
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-097314 logs -n 25: (1.911994884s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-255255                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-255255                                                                     | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | -o=json --download-only                                                                     | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-465986                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                                                                |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-465986                                                                     | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | -o=json --download-only                                                                     | download-only-546206 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-546206                                                                     |                      |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-546206                                                                     | download-only-546206 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-255255                                                                     | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-465986                                                                     | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-546206                                                                     | download-only-546206 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-349079 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | binary-mirror-349079                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:37207                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-349079                                                                     | binary-mirror-349079 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| addons  | enable dashboard -p                                                                         | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | addons-097314                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | addons-097314                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-097314 --wait=true                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:58 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-097314 ssh cat                                                                       | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | /opt/local-path-provisioner/pvc-c163d35d-fa3b-40ab-b865-3fb0f205250a_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | -p addons-097314                                                                            |                      |         |         |                     |                     |
	| ip      | addons-097314 ip                                                                            | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	| addons  | addons-097314 addons disable                                                                | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC | 15 Mar 24 22:59 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-097314 addons                                                                        | addons-097314        | jenkins | v1.32.0 | 15 Mar 24 22:59 UTC |                     |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 22:56:35
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 22:56:35.846979   83607 out.go:291] Setting OutFile to fd 1 ...
	I0315 22:56:35.847131   83607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:35.847142   83607 out.go:304] Setting ErrFile to fd 2...
	I0315 22:56:35.847146   83607 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:35.847378   83607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 22:56:35.848119   83607 out.go:298] Setting JSON to false
	I0315 22:56:35.848997   83607 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5946,"bootTime":1710537450,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 22:56:35.849063   83607 start.go:139] virtualization: kvm guest
	I0315 22:56:35.851568   83607 out.go:177] * [addons-097314] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 22:56:35.853041   83607 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 22:56:35.853179   83607 notify.go:220] Checking for updates...
	I0315 22:56:35.854546   83607 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 22:56:35.856054   83607 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:56:35.857407   83607 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:35.858762   83607 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 22:56:35.860066   83607 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 22:56:35.861529   83607 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 22:56:35.893553   83607 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 22:56:35.895003   83607 start.go:297] selected driver: kvm2
	I0315 22:56:35.895023   83607 start.go:901] validating driver "kvm2" against <nil>
	I0315 22:56:35.895034   83607 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 22:56:35.895747   83607 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:35.895811   83607 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 22:56:35.910357   83607 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 22:56:35.910406   83607 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 22:56:35.910625   83607 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 22:56:35.910683   83607 cni.go:84] Creating CNI manager for ""
	I0315 22:56:35.910695   83607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:56:35.910702   83607 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 22:56:35.910774   83607 start.go:340] cluster config:
	{Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 22:56:35.910902   83607 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:35.912648   83607 out.go:177] * Starting "addons-097314" primary control-plane node in "addons-097314" cluster
	I0315 22:56:35.913856   83607 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 22:56:35.913886   83607 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 22:56:35.913893   83607 cache.go:56] Caching tarball of preloaded images
	I0315 22:56:35.913961   83607 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 22:56:35.913971   83607 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 22:56:35.914255   83607 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/config.json ...
	I0315 22:56:35.914275   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/config.json: {Name:mk9a389d40bfd20da607554ee69b85887d211b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:56:35.914406   83607 start.go:360] acquireMachinesLock for addons-097314: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 22:56:35.914446   83607 start.go:364] duration metric: took 27.181µs to acquireMachinesLock for "addons-097314"
	I0315 22:56:35.914463   83607 start.go:93] Provisioning new machine with config: &{Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 22:56:35.914537   83607 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 22:56:35.916140   83607 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0315 22:56:35.916256   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:56:35.916290   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:56:35.930193   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40653
	I0315 22:56:35.930625   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:56:35.931170   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:56:35.931196   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:56:35.931980   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:56:35.932969   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:56:35.933163   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:56:35.933309   83607 start.go:159] libmachine.API.Create for "addons-097314" (driver="kvm2")
	I0315 22:56:35.933336   83607 client.go:168] LocalClient.Create starting
	I0315 22:56:35.933371   83607 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 22:56:36.055123   83607 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 22:56:36.219582   83607 main.go:141] libmachine: Running pre-create checks...
	I0315 22:56:36.219608   83607 main.go:141] libmachine: (addons-097314) Calling .PreCreateCheck
	I0315 22:56:36.220161   83607 main.go:141] libmachine: (addons-097314) Calling .GetConfigRaw
	I0315 22:56:36.220629   83607 main.go:141] libmachine: Creating machine...
	I0315 22:56:36.220646   83607 main.go:141] libmachine: (addons-097314) Calling .Create
	I0315 22:56:36.220811   83607 main.go:141] libmachine: (addons-097314) Creating KVM machine...
	I0315 22:56:36.222018   83607 main.go:141] libmachine: (addons-097314) DBG | found existing default KVM network
	I0315 22:56:36.222702   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.222573   83629 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0315 22:56:36.222732   83607 main.go:141] libmachine: (addons-097314) DBG | created network xml: 
	I0315 22:56:36.222743   83607 main.go:141] libmachine: (addons-097314) DBG | <network>
	I0315 22:56:36.222751   83607 main.go:141] libmachine: (addons-097314) DBG |   <name>mk-addons-097314</name>
	I0315 22:56:36.222760   83607 main.go:141] libmachine: (addons-097314) DBG |   <dns enable='no'/>
	I0315 22:56:36.222769   83607 main.go:141] libmachine: (addons-097314) DBG |   
	I0315 22:56:36.222779   83607 main.go:141] libmachine: (addons-097314) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 22:56:36.222790   83607 main.go:141] libmachine: (addons-097314) DBG |     <dhcp>
	I0315 22:56:36.222797   83607 main.go:141] libmachine: (addons-097314) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 22:56:36.222802   83607 main.go:141] libmachine: (addons-097314) DBG |     </dhcp>
	I0315 22:56:36.222809   83607 main.go:141] libmachine: (addons-097314) DBG |   </ip>
	I0315 22:56:36.222814   83607 main.go:141] libmachine: (addons-097314) DBG |   
	I0315 22:56:36.222821   83607 main.go:141] libmachine: (addons-097314) DBG | </network>
	I0315 22:56:36.222826   83607 main.go:141] libmachine: (addons-097314) DBG | 
	I0315 22:56:36.228234   83607 main.go:141] libmachine: (addons-097314) DBG | trying to create private KVM network mk-addons-097314 192.168.39.0/24...
	I0315 22:56:36.292377   83607 main.go:141] libmachine: (addons-097314) DBG | private KVM network mk-addons-097314 192.168.39.0/24 created
	I0315 22:56:36.292409   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.292330   83629 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:36.292435   83607 main.go:141] libmachine: (addons-097314) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314 ...
	I0315 22:56:36.292455   83607 main.go:141] libmachine: (addons-097314) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 22:56:36.292481   83607 main.go:141] libmachine: (addons-097314) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 22:56:36.522653   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.522538   83629 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa...
	I0315 22:56:36.716752   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.716536   83629 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/addons-097314.rawdisk...
	I0315 22:56:36.716808   83607 main.go:141] libmachine: (addons-097314) DBG | Writing magic tar header
	I0315 22:56:36.716829   83607 main.go:141] libmachine: (addons-097314) DBG | Writing SSH key tar header
	I0315 22:56:36.716842   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:36.716703   83629 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314 ...
	I0315 22:56:36.716858   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314 (perms=drwx------)
	I0315 22:56:36.716910   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 22:56:36.716937   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314
	I0315 22:56:36.716947   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 22:56:36.716961   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 22:56:36.716970   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 22:56:36.716986   83607 main.go:141] libmachine: (addons-097314) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 22:56:36.716995   83607 main.go:141] libmachine: (addons-097314) Creating domain...
	I0315 22:56:36.717030   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 22:56:36.717066   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:36.717080   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 22:56:36.717088   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 22:56:36.717102   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home/jenkins
	I0315 22:56:36.717112   83607 main.go:141] libmachine: (addons-097314) DBG | Checking permissions on dir: /home
	I0315 22:56:36.717127   83607 main.go:141] libmachine: (addons-097314) DBG | Skipping /home - not owner
	I0315 22:56:36.718117   83607 main.go:141] libmachine: (addons-097314) define libvirt domain using xml: 
	I0315 22:56:36.718146   83607 main.go:141] libmachine: (addons-097314) <domain type='kvm'>
	I0315 22:56:36.718153   83607 main.go:141] libmachine: (addons-097314)   <name>addons-097314</name>
	I0315 22:56:36.718158   83607 main.go:141] libmachine: (addons-097314)   <memory unit='MiB'>4000</memory>
	I0315 22:56:36.718164   83607 main.go:141] libmachine: (addons-097314)   <vcpu>2</vcpu>
	I0315 22:56:36.718168   83607 main.go:141] libmachine: (addons-097314)   <features>
	I0315 22:56:36.718172   83607 main.go:141] libmachine: (addons-097314)     <acpi/>
	I0315 22:56:36.718176   83607 main.go:141] libmachine: (addons-097314)     <apic/>
	I0315 22:56:36.718181   83607 main.go:141] libmachine: (addons-097314)     <pae/>
	I0315 22:56:36.718187   83607 main.go:141] libmachine: (addons-097314)     
	I0315 22:56:36.718193   83607 main.go:141] libmachine: (addons-097314)   </features>
	I0315 22:56:36.718203   83607 main.go:141] libmachine: (addons-097314)   <cpu mode='host-passthrough'>
	I0315 22:56:36.718207   83607 main.go:141] libmachine: (addons-097314)   
	I0315 22:56:36.718214   83607 main.go:141] libmachine: (addons-097314)   </cpu>
	I0315 22:56:36.718221   83607 main.go:141] libmachine: (addons-097314)   <os>
	I0315 22:56:36.718229   83607 main.go:141] libmachine: (addons-097314)     <type>hvm</type>
	I0315 22:56:36.718237   83607 main.go:141] libmachine: (addons-097314)     <boot dev='cdrom'/>
	I0315 22:56:36.718242   83607 main.go:141] libmachine: (addons-097314)     <boot dev='hd'/>
	I0315 22:56:36.718250   83607 main.go:141] libmachine: (addons-097314)     <bootmenu enable='no'/>
	I0315 22:56:36.718254   83607 main.go:141] libmachine: (addons-097314)   </os>
	I0315 22:56:36.718259   83607 main.go:141] libmachine: (addons-097314)   <devices>
	I0315 22:56:36.718265   83607 main.go:141] libmachine: (addons-097314)     <disk type='file' device='cdrom'>
	I0315 22:56:36.718280   83607 main.go:141] libmachine: (addons-097314)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/boot2docker.iso'/>
	I0315 22:56:36.718289   83607 main.go:141] libmachine: (addons-097314)       <target dev='hdc' bus='scsi'/>
	I0315 22:56:36.718297   83607 main.go:141] libmachine: (addons-097314)       <readonly/>
	I0315 22:56:36.718301   83607 main.go:141] libmachine: (addons-097314)     </disk>
	I0315 22:56:36.718309   83607 main.go:141] libmachine: (addons-097314)     <disk type='file' device='disk'>
	I0315 22:56:36.718320   83607 main.go:141] libmachine: (addons-097314)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 22:56:36.718331   83607 main.go:141] libmachine: (addons-097314)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/addons-097314.rawdisk'/>
	I0315 22:56:36.718338   83607 main.go:141] libmachine: (addons-097314)       <target dev='hda' bus='virtio'/>
	I0315 22:56:36.718343   83607 main.go:141] libmachine: (addons-097314)     </disk>
	I0315 22:56:36.718350   83607 main.go:141] libmachine: (addons-097314)     <interface type='network'>
	I0315 22:56:36.718356   83607 main.go:141] libmachine: (addons-097314)       <source network='mk-addons-097314'/>
	I0315 22:56:36.718363   83607 main.go:141] libmachine: (addons-097314)       <model type='virtio'/>
	I0315 22:56:36.718369   83607 main.go:141] libmachine: (addons-097314)     </interface>
	I0315 22:56:36.718376   83607 main.go:141] libmachine: (addons-097314)     <interface type='network'>
	I0315 22:56:36.718381   83607 main.go:141] libmachine: (addons-097314)       <source network='default'/>
	I0315 22:56:36.718388   83607 main.go:141] libmachine: (addons-097314)       <model type='virtio'/>
	I0315 22:56:36.718393   83607 main.go:141] libmachine: (addons-097314)     </interface>
	I0315 22:56:36.718401   83607 main.go:141] libmachine: (addons-097314)     <serial type='pty'>
	I0315 22:56:36.718407   83607 main.go:141] libmachine: (addons-097314)       <target port='0'/>
	I0315 22:56:36.718414   83607 main.go:141] libmachine: (addons-097314)     </serial>
	I0315 22:56:36.718419   83607 main.go:141] libmachine: (addons-097314)     <console type='pty'>
	I0315 22:56:36.718428   83607 main.go:141] libmachine: (addons-097314)       <target type='serial' port='0'/>
	I0315 22:56:36.718435   83607 main.go:141] libmachine: (addons-097314)     </console>
	I0315 22:56:36.718440   83607 main.go:141] libmachine: (addons-097314)     <rng model='virtio'>
	I0315 22:56:36.718448   83607 main.go:141] libmachine: (addons-097314)       <backend model='random'>/dev/random</backend>
	I0315 22:56:36.718454   83607 main.go:141] libmachine: (addons-097314)     </rng>
	I0315 22:56:36.718459   83607 main.go:141] libmachine: (addons-097314)     
	I0315 22:56:36.718471   83607 main.go:141] libmachine: (addons-097314)     
	I0315 22:56:36.718482   83607 main.go:141] libmachine: (addons-097314)   </devices>
	I0315 22:56:36.718495   83607 main.go:141] libmachine: (addons-097314) </domain>
	I0315 22:56:36.718515   83607 main.go:141] libmachine: (addons-097314) 
	I0315 22:56:36.723108   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:4c:e1:0d in network default
	I0315 22:56:36.723640   83607 main.go:141] libmachine: (addons-097314) Ensuring networks are active...
	I0315 22:56:36.723660   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:36.724253   83607 main.go:141] libmachine: (addons-097314) Ensuring network default is active
	I0315 22:56:36.724487   83607 main.go:141] libmachine: (addons-097314) Ensuring network mk-addons-097314 is active
	I0315 22:56:36.724912   83607 main.go:141] libmachine: (addons-097314) Getting domain xml...
	I0315 22:56:36.725544   83607 main.go:141] libmachine: (addons-097314) Creating domain...
	I0315 22:56:37.902191   83607 main.go:141] libmachine: (addons-097314) Waiting to get IP...
	I0315 22:56:37.903004   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:37.903387   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:37.903430   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:37.903373   83629 retry.go:31] will retry after 235.474185ms: waiting for machine to come up
	I0315 22:56:38.140840   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:38.141345   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:38.141374   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:38.141307   83629 retry.go:31] will retry after 264.242261ms: waiting for machine to come up
	I0315 22:56:38.406766   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:38.407224   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:38.407251   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:38.407168   83629 retry.go:31] will retry after 360.617395ms: waiting for machine to come up
	I0315 22:56:38.769711   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:38.770095   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:38.770127   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:38.770047   83629 retry.go:31] will retry after 390.899063ms: waiting for machine to come up
	I0315 22:56:39.162804   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:39.163234   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:39.163266   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:39.163186   83629 retry.go:31] will retry after 668.450716ms: waiting for machine to come up
	I0315 22:56:39.833588   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:39.833981   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:39.834018   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:39.833918   83629 retry.go:31] will retry after 923.27146ms: waiting for machine to come up
	I0315 22:56:40.758954   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:40.759298   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:40.759348   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:40.759247   83629 retry.go:31] will retry after 1.180578271s: waiting for machine to come up
	I0315 22:56:41.941457   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:41.942001   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:41.942029   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:41.941956   83629 retry.go:31] will retry after 1.155606203s: waiting for machine to come up
	I0315 22:56:43.099358   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:43.099823   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:43.099856   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:43.099775   83629 retry.go:31] will retry after 1.855181258s: waiting for machine to come up
	I0315 22:56:44.956293   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:44.956662   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:44.956691   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:44.956619   83629 retry.go:31] will retry after 2.062737263s: waiting for machine to come up
	I0315 22:56:47.020698   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:47.021211   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:47.021243   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:47.021158   83629 retry.go:31] will retry after 1.849288333s: waiting for machine to come up
	I0315 22:56:48.873145   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:48.873573   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:48.873606   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:48.873532   83629 retry.go:31] will retry after 2.428758066s: waiting for machine to come up
	I0315 22:56:51.303807   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:51.304223   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:51.304250   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:51.304161   83629 retry.go:31] will retry after 3.707319346s: waiting for machine to come up
	I0315 22:56:55.012756   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:56:55.013238   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find current IP address of domain addons-097314 in network mk-addons-097314
	I0315 22:56:55.013261   83607 main.go:141] libmachine: (addons-097314) DBG | I0315 22:56:55.013188   83629 retry.go:31] will retry after 5.268140743s: waiting for machine to come up
	I0315 22:57:00.285845   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.286302   83607 main.go:141] libmachine: (addons-097314) Found IP for machine: 192.168.39.35
	I0315 22:57:00.286331   83607 main.go:141] libmachine: (addons-097314) Reserving static IP address...
	I0315 22:57:00.286344   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has current primary IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.286767   83607 main.go:141] libmachine: (addons-097314) DBG | unable to find host DHCP lease matching {name: "addons-097314", mac: "52:54:00:63:6b:cb", ip: "192.168.39.35"} in network mk-addons-097314
	I0315 22:57:00.359137   83607 main.go:141] libmachine: (addons-097314) Reserved static IP address: 192.168.39.35
	I0315 22:57:00.359174   83607 main.go:141] libmachine: (addons-097314) DBG | Getting to WaitForSSH function...
	I0315 22:57:00.359183   83607 main.go:141] libmachine: (addons-097314) Waiting for SSH to be available...
	I0315 22:57:00.361688   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.362132   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:minikube Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.362167   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.362256   83607 main.go:141] libmachine: (addons-097314) DBG | Using SSH client type: external
	I0315 22:57:00.362286   83607 main.go:141] libmachine: (addons-097314) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa (-rw-------)
	I0315 22:57:00.362308   83607 main.go:141] libmachine: (addons-097314) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.35 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 22:57:00.362321   83607 main.go:141] libmachine: (addons-097314) DBG | About to run SSH command:
	I0315 22:57:00.362330   83607 main.go:141] libmachine: (addons-097314) DBG | exit 0
	I0315 22:57:00.483305   83607 main.go:141] libmachine: (addons-097314) DBG | SSH cmd err, output: <nil>: 
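
Before declaring the KVM machine created, the driver proves SSH liveness by running `exit 0` through an external ssh client with host-key checking disabled and the machine's generated private key, as the DBG lines above show. A rough, self-contained approximation follows; the retry count and sleep interval are assumptions, not the driver's actual policy.

// Sketch of the external-SSH "exit 0" liveness probe seen in the log.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func sshReady(keyPath, addr string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		"docker@"+addr,
		"exit 0",
	)
	// A zero exit status means the guest's sshd accepted the key and ran the command.
	return cmd.Run() == nil
}

func main() {
	key := "/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa"
	for i := 0; i < 10; i++ {
		if sshReady(key, "192.168.39.35") {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(5 * time.Second) // assumed backoff; the real code uses retry.go
	}
	fmt.Println("gave up waiting for SSH")
}
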
	I0315 22:57:00.483669   83607 main.go:141] libmachine: (addons-097314) KVM machine creation complete!
	I0315 22:57:00.483956   83607 main.go:141] libmachine: (addons-097314) Calling .GetConfigRaw
	I0315 22:57:00.484531   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:00.484758   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:00.484936   83607 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 22:57:00.484955   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:00.486174   83607 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 22:57:00.486187   83607 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 22:57:00.486193   83607 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 22:57:00.486199   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.488861   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.489203   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.489232   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.489373   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.489609   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.489803   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.489974   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.490127   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.490387   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.490400   83607 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 22:57:00.590901   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 22:57:00.590930   83607 main.go:141] libmachine: Detecting the provisioner...
	I0315 22:57:00.590939   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.593885   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.594196   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.594228   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.594337   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.594563   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.594704   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.594840   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.595043   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.595212   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.595226   83607 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 22:57:00.696110   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 22:57:00.696278   83607 main.go:141] libmachine: found compatible host: buildroot
	I0315 22:57:00.696297   83607 main.go:141] libmachine: Provisioning with buildroot...
	I0315 22:57:00.696309   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:57:00.696593   83607 buildroot.go:166] provisioning hostname "addons-097314"
	I0315 22:57:00.696615   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:57:00.696799   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.699431   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.699756   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.699786   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.699883   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.700064   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.700209   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.700331   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.700467   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.700637   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.700649   83607 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-097314 && echo "addons-097314" | sudo tee /etc/hostname
	I0315 22:57:00.813703   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-097314
	
	I0315 22:57:00.813737   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.816631   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.817125   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.817161   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.817316   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:00.817511   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.817664   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:00.817775   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:00.817917   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:00.818195   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:00.818225   83607 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-097314' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-097314/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-097314' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 22:57:00.928753   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
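
The SSH command above keeps /etc/hosts consistent with the new hostname: if no line already mentions addons-097314, the existing 127.0.1.1 entry is rewritten, or one is appended. A minimal Go equivalent of that edit, purely for illustration (the match is looser than the shell's grep pattern):

// Illustrative in-memory version of the /etc/hosts hostname fix-up.
package main

import (
	"fmt"
	"strings"
)

func ensureHostsEntry(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	replaced := false
	for i, l := range lines {
		if strings.Contains(l, name) {
			return hosts // hostname already present, nothing to do
		}
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name)
	}
	return strings.Join(lines, "\n")
}

func main() {
	in := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
	fmt.Print(ensureHostsEntry(in, "addons-097314"))
}
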
	I0315 22:57:00.928784   83607 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 22:57:00.928841   83607 buildroot.go:174] setting up certificates
	I0315 22:57:00.928860   83607 provision.go:84] configureAuth start
	I0315 22:57:00.928875   83607 main.go:141] libmachine: (addons-097314) Calling .GetMachineName
	I0315 22:57:00.929153   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:00.932113   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.932481   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.932515   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.932675   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:00.934999   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.935332   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:00.935362   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:00.935470   83607 provision.go:143] copyHostCerts
	I0315 22:57:00.935546   83607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 22:57:00.935694   83607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 22:57:00.935764   83607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 22:57:00.935812   83607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.addons-097314 san=[127.0.0.1 192.168.39.35 addons-097314 localhost minikube]
	I0315 22:57:01.227741   83607 provision.go:177] copyRemoteCerts
	I0315 22:57:01.227827   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 22:57:01.227864   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.230750   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.231043   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.231075   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.231233   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.231461   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.231652   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.231811   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.310334   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 22:57:01.337752   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 22:57:01.362073   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 22:57:01.386359   83607 provision.go:87] duration metric: took 457.484131ms to configureAuth
	I0315 22:57:01.386392   83607 buildroot.go:189] setting minikube options for container-runtime
	I0315 22:57:01.386594   83607 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:57:01.386672   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.389587   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.389974   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.390005   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.390235   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.390400   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.390546   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.390666   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.390862   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:01.391021   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:01.391035   83607 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 22:57:01.659917   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 22:57:01.659951   83607 main.go:141] libmachine: Checking connection to Docker...
	I0315 22:57:01.659963   83607 main.go:141] libmachine: (addons-097314) Calling .GetURL
	I0315 22:57:01.661291   83607 main.go:141] libmachine: (addons-097314) DBG | Using libvirt version 6000000
	I0315 22:57:01.663659   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.664117   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.664146   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.664365   83607 main.go:141] libmachine: Docker is up and running!
	I0315 22:57:01.664382   83607 main.go:141] libmachine: Reticulating splines...
	I0315 22:57:01.664390   83607 client.go:171] duration metric: took 25.731043156s to LocalClient.Create
	I0315 22:57:01.664413   83607 start.go:167] duration metric: took 25.731106407s to libmachine.API.Create "addons-097314"
	I0315 22:57:01.664424   83607 start.go:293] postStartSetup for "addons-097314" (driver="kvm2")
	I0315 22:57:01.664443   83607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 22:57:01.664462   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.664681   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 22:57:01.664706   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.667056   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.667363   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.667395   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.667566   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.667737   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.667920   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.668086   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.746265   83607 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 22:57:01.750691   83607 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 22:57:01.750719   83607 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 22:57:01.750801   83607 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 22:57:01.750831   83607 start.go:296] duration metric: took 86.399698ms for postStartSetup
	I0315 22:57:01.750870   83607 main.go:141] libmachine: (addons-097314) Calling .GetConfigRaw
	I0315 22:57:01.751459   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:01.754108   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.754508   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.754530   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.754772   83607 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/config.json ...
	I0315 22:57:01.754971   83607 start.go:128] duration metric: took 25.840419899s to createHost
	I0315 22:57:01.754997   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.756951   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.757236   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.757267   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.757384   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.757553   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.757703   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.757839   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.758021   83607 main.go:141] libmachine: Using SSH client type: native
	I0315 22:57:01.758165   83607 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.35 22 <nil> <nil>}
	I0315 22:57:01.758176   83607 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 22:57:01.860565   83607 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710543421.834574284
	
	I0315 22:57:01.860596   83607 fix.go:216] guest clock: 1710543421.834574284
	I0315 22:57:01.860607   83607 fix.go:229] Guest: 2024-03-15 22:57:01.834574284 +0000 UTC Remote: 2024-03-15 22:57:01.754984136 +0000 UTC m=+25.954352188 (delta=79.590148ms)
	I0315 22:57:01.860634   83607 fix.go:200] guest clock delta is within tolerance: 79.590148ms
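
The guest-clock check above runs `date +%s.%N` on the VM, parses the result into a timestamp, and compares it with the host clock; a delta inside the tolerance (79.590148ms here) means no time resync is needed. A minimal sketch of that comparison follows, with an assumed tolerance value standing in for minikube's real threshold.

// Minimal sketch (not minikube's fix.go) of the guest clock-skew check.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts output like "1710543421.834574284" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part to 9 digits before parsing.
		frac := (parts[1] + "000000000")[:9]
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710543421.834574284")
	if err != nil {
		panic(err)
	}
	host := time.Now()
	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed value for illustration
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
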
	I0315 22:57:01.860642   83607 start.go:83] releasing machines lock for "addons-097314", held for 25.946184894s
	I0315 22:57:01.860673   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.860978   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:01.863608   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.863995   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.864017   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.864147   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.864712   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.864891   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:01.864986   83607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 22:57:01.865052   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.865095   83607 ssh_runner.go:195] Run: cat /version.json
	I0315 22:57:01.865115   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:01.867777   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.867863   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.868206   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.868228   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:01.868249   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.868266   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:01.868442   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.868443   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:01.868695   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.868711   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:01.868909   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.868920   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:01.869107   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.869104   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:01.964885   83607 ssh_runner.go:195] Run: systemctl --version
	I0315 22:57:01.971145   83607 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 22:57:02.128387   83607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 22:57:02.135913   83607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 22:57:02.136011   83607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 22:57:02.152257   83607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 22:57:02.152281   83607 start.go:494] detecting cgroup driver to use...
	I0315 22:57:02.152363   83607 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 22:57:02.169043   83607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 22:57:02.183278   83607 docker.go:217] disabling cri-docker service (if available) ...
	I0315 22:57:02.183361   83607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 22:57:02.198110   83607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 22:57:02.212959   83607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 22:57:02.325860   83607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 22:57:02.459933   83607 docker.go:233] disabling docker service ...
	I0315 22:57:02.460022   83607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 22:57:02.474725   83607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 22:57:02.487639   83607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 22:57:02.620047   83607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 22:57:02.746721   83607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 22:57:02.760963   83607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 22:57:02.779731   83607 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 22:57:02.779805   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.790166   83607 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 22:57:02.790234   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.800430   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 22:57:02.810707   83607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
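
The sequence of sed commands above rewrites CRI-O's drop-in config: the pause image is pinned to registry.k8s.io/pause:3.9, the cgroup manager is switched to cgroupfs, and conmon_cgroup is reset to "pod". The snippet below reproduces the first two substitutions in Go on an in-memory file; it is an illustration only, since minikube itself shells out to sed over SSH.

// In-memory equivalent of the pause_image / cgroup_manager sed rewrites.
package main

import (
	"fmt"
	"regexp"
)

func rewriteCrioConf(conf []byte, pauseImage, cgroupManager string) []byte {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = pause.ReplaceAll(conf, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	conf = cgroup.ReplaceAll(conf, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
	return conf
}

func main() {
	in := []byte("[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n")
	out := rewriteCrioConf(in, "registry.k8s.io/pause:3.9", "cgroupfs")
	fmt.Print(string(out))
}
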
	I0315 22:57:02.820923   83607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 22:57:02.831280   83607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 22:57:02.840562   83607 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 22:57:02.840630   83607 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 22:57:02.853802   83607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
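
When the bridge-nf-call-iptables sysctl is missing (as in the status-255 error above), the provisioner falls back to loading br_netfilter and then enables IPv4 forwarding. A small Go approximation of that fallback is shown below; it needs root to actually take effect and the helper names are illustrative.

// Sketch of the netfilter preparation: check the sysctl, load br_netfilter if
// needed, then enable IPv4 forwarding (the log's `echo 1 > .../ip_forward`).
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// Sysctl missing: the bridge netfilter module is not loaded yet.
		fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter")
		if err := run("modprobe", "br_netfilter"); err != nil {
			fmt.Fprintln(os.Stderr, "modprobe failed:", err)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Fprintln(os.Stderr, "could not enable ip_forward (needs root):", err)
	}
}
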
	I0315 22:57:02.862950   83607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 22:57:02.980844   83607 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 22:57:03.120711   83607 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 22:57:03.120818   83607 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 22:57:03.126187   83607 start.go:562] Will wait 60s for crictl version
	I0315 22:57:03.126267   83607 ssh_runner.go:195] Run: which crictl
	I0315 22:57:03.130398   83607 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 22:57:03.167541   83607 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 22:57:03.167638   83607 ssh_runner.go:195] Run: crio --version
	I0315 22:57:03.196364   83607 ssh_runner.go:195] Run: crio --version
	I0315 22:57:03.227014   83607 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 22:57:03.228403   83607 main.go:141] libmachine: (addons-097314) Calling .GetIP
	I0315 22:57:03.231113   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:03.231522   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:03.231544   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:03.231776   83607 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 22:57:03.236325   83607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 22:57:03.249536   83607 kubeadm.go:877] updating cluster {Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 22:57:03.249650   83607 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 22:57:03.249712   83607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 22:57:03.282859   83607 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 22:57:03.282929   83607 ssh_runner.go:195] Run: which lz4
	I0315 22:57:03.287274   83607 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 22:57:03.291632   83607 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 22:57:03.291669   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 22:57:04.831791   83607 crio.go:444] duration metric: took 1.544556745s to copy over tarball
	I0315 22:57:04.831904   83607 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 22:57:07.421547   83607 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.589597094s)
	I0315 22:57:07.421597   83607 crio.go:451] duration metric: took 2.589769279s to extract the tarball
	I0315 22:57:07.421609   83607 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 22:57:07.463807   83607 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 22:57:07.516830   83607 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 22:57:07.516862   83607 cache_images.go:84] Images are preloaded, skipping loading
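
The two `crictl images --output json` calls bracket the preload step: before extraction the expected kube images are absent, so the lz4 tarball is copied over and untarred into /var; afterwards the same listing shows everything present and image loading is skipped. Below is a hedged sketch of that presence check; the JSON field names follow crictl's output format and the target image is just an example.

// Sketch of the "are the preloaded images already present?" check.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

type criImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func preloaded(out []byte, want string) bool {
	var imgs criImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true
			}
		}
	}
	return false
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl not available:", err)
		return
	}
	if preloaded(out, "registry.k8s.io/kube-apiserver:v1.28.4") {
		fmt.Println("all images are preloaded, skipping loading")
	} else {
		fmt.Println("preload missing; would copy and extract preloaded.tar.lz4")
	}
}
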
	I0315 22:57:07.516871   83607 kubeadm.go:928] updating node { 192.168.39.35 8443 v1.28.4 crio true true} ...
	I0315 22:57:07.517007   83607 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-097314 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.35
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 22:57:07.517074   83607 ssh_runner.go:195] Run: crio config
	I0315 22:57:07.577011   83607 cni.go:84] Creating CNI manager for ""
	I0315 22:57:07.577037   83607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:57:07.577052   83607 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 22:57:07.577082   83607 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.35 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-097314 NodeName:addons-097314 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.35"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.35 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 22:57:07.577260   83607 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.35
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-097314"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.35
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.35"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 22:57:07.577345   83607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 22:57:07.587748   83607 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 22:57:07.587831   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 22:57:07.597280   83607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0315 22:57:07.614689   83607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 22:57:07.631989   83607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0315 22:57:07.650950   83607 ssh_runner.go:195] Run: grep 192.168.39.35	control-plane.minikube.internal$ /etc/hosts
	I0315 22:57:07.655173   83607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.35	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 22:57:07.667194   83607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 22:57:07.787494   83607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 22:57:07.804477   83607 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314 for IP: 192.168.39.35
	I0315 22:57:07.804509   83607 certs.go:194] generating shared ca certs ...
	I0315 22:57:07.804541   83607 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:07.804710   83607 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 22:57:07.984834   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt ...
	I0315 22:57:07.984868   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt: {Name:mk3c02333392a6c3484e85a7518b751e968d59cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:07.985054   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key ...
	I0315 22:57:07.985068   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key: {Name:mk21576eef6d3218697b62737d69b1ef1151dfed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:07.985143   83607 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 22:57:08.057203   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt ...
	I0315 22:57:08.057239   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt: {Name:mk6e95ddb451577f3d23ae9dc52b109da94def40 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.057411   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key ...
	I0315 22:57:08.057424   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key: {Name:mk2ce7a41e9e2a5497efd806366764a0af769c47 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.057495   83607 certs.go:256] generating profile certs ...
	I0315 22:57:08.057556   83607 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.key
	I0315 22:57:08.057577   83607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt with IP's: []
	I0315 22:57:08.164041   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt ...
	I0315 22:57:08.164074   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: {Name:mk2eaa2f399cb2eaafc178b08e708540f2ded1ba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.164233   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.key ...
	I0315 22:57:08.164245   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.key: {Name:mkd675fdcce47d9432783a09331b990f01e8f7c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.164312   83607 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65
	I0315 22:57:08.164352   83607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35]
	I0315 22:57:08.323595   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65 ...
	I0315 22:57:08.323632   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65: {Name:mk006728a1b0349596d9911ad44e9bd106cf826e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.323825   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65 ...
	I0315 22:57:08.323845   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65: {Name:mkf8f076d3e10addd5084544690433f5ba38b7d0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.323943   83607 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt.4a742f65 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt
	I0315 22:57:08.324108   83607 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key.4a742f65 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key
	I0315 22:57:08.324185   83607 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key
	I0315 22:57:08.324212   83607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt with IP's: []
	I0315 22:57:08.469102   83607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt ...
	I0315 22:57:08.469133   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt: {Name:mk84532477fc1d12e43e727f4f4b0d0ea6f9c99c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:08.469315   83607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key ...
	I0315 22:57:08.469335   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key: {Name:mkab0511c5c861eca417cd847473ba1dd53b7b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
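
Generating the "minikube" profile cert with IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.35] boils down to signing an x509 certificate carrying those addresses with the minikubeCA key pair. The generic crypto/x509 example below shows the shape of that operation; it is not minikube's crypto.go and it ignores error handling for brevity.

// Generic example: a CA plus a leaf certificate with the IP SANs from the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// CA key and self-signed CA certificate (stands in for minikubeCA).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// API-server-style leaf certificate with the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.39.35"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}
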
	I0315 22:57:08.469685   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 22:57:08.469740   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 22:57:08.469777   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 22:57:08.469805   83607 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 22:57:08.470511   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 22:57:08.519837   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 22:57:08.548815   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 22:57:08.579224   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 22:57:08.603353   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0315 22:57:08.627874   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 22:57:08.653357   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 22:57:08.678971   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 22:57:08.704191   83607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 22:57:08.728497   83607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 22:57:08.746057   83607 ssh_runner.go:195] Run: openssl version
	I0315 22:57:08.751916   83607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 22:57:08.762904   83607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 22:57:08.767480   83607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 22:57:08.767556   83607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 22:57:08.773119   83607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 22:57:08.783476   83607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 22:57:08.787493   83607 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 22:57:08.787537   83607 kubeadm.go:391] StartCluster: {Name:addons-097314 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-097314 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 22:57:08.787653   83607 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 22:57:08.787707   83607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 22:57:08.822494   83607 cri.go:89] found id: ""
	I0315 22:57:08.822584   83607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 22:57:08.832679   83607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 22:57:08.842513   83607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 22:57:08.852068   83607 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 22:57:08.852093   83607 kubeadm.go:156] found existing configuration files:
	
	I0315 22:57:08.852141   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 22:57:08.860906   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 22:57:08.860961   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 22:57:08.870011   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 22:57:08.878548   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 22:57:08.878588   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 22:57:08.887629   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 22:57:08.896295   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 22:57:08.896350   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 22:57:08.905252   83607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 22:57:08.914336   83607 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 22:57:08.914390   83607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
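The four grep-then-rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and is otherwise removed so the upcoming kubeadm init can regenerate it. Condensed into a loop, a sketch using the same endpoint and file names as the log:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
        # keep the file only if it points at the expected endpoint; otherwise remove it
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
            || sudo rm -f "/etc/kubernetes/$f"
    done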
	I0315 22:57:08.923446   83607 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 22:57:09.112409   83607 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 22:57:19.250947   83607 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 22:57:19.251028   83607 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 22:57:19.251087   83607 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 22:57:19.251205   83607 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 22:57:19.251300   83607 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0315 22:57:19.251377   83607 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 22:57:19.253120   83607 out.go:204]   - Generating certificates and keys ...
	I0315 22:57:19.253203   83607 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 22:57:19.253258   83607 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 22:57:19.253374   83607 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 22:57:19.253464   83607 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 22:57:19.253556   83607 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 22:57:19.253642   83607 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 22:57:19.253737   83607 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 22:57:19.253908   83607 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-097314 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I0315 22:57:19.253996   83607 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 22:57:19.254139   83607 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-097314 localhost] and IPs [192.168.39.35 127.0.0.1 ::1]
	I0315 22:57:19.254224   83607 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 22:57:19.254324   83607 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 22:57:19.254393   83607 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 22:57:19.254480   83607 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 22:57:19.254564   83607 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 22:57:19.254666   83607 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 22:57:19.254754   83607 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 22:57:19.254829   83607 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 22:57:19.254950   83607 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 22:57:19.255044   83607 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 22:57:19.256742   83607 out.go:204]   - Booting up control plane ...
	I0315 22:57:19.256861   83607 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 22:57:19.257008   83607 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 22:57:19.257093   83607 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 22:57:19.257247   83607 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 22:57:19.257369   83607 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 22:57:19.257426   83607 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 22:57:19.257702   83607 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 22:57:19.257802   83607 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.002763 seconds
	I0315 22:57:19.257940   83607 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 22:57:19.258110   83607 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 22:57:19.258187   83607 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 22:57:19.258426   83607 kubeadm.go:309] [mark-control-plane] Marking the node addons-097314 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 22:57:19.258511   83607 kubeadm.go:309] [bootstrap-token] Using token: qikmp3.n4r8rw2ox0aq6wwt
	I0315 22:57:19.260150   83607 out.go:204]   - Configuring RBAC rules ...
	I0315 22:57:19.260305   83607 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 22:57:19.260409   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 22:57:19.260567   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 22:57:19.260705   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 22:57:19.260833   83607 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 22:57:19.260997   83607 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 22:57:19.261166   83607 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 22:57:19.261229   83607 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 22:57:19.261276   83607 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 22:57:19.261287   83607 kubeadm.go:309] 
	I0315 22:57:19.261341   83607 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 22:57:19.261348   83607 kubeadm.go:309] 
	I0315 22:57:19.261442   83607 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 22:57:19.261454   83607 kubeadm.go:309] 
	I0315 22:57:19.261478   83607 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 22:57:19.261556   83607 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 22:57:19.261631   83607 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 22:57:19.261641   83607 kubeadm.go:309] 
	I0315 22:57:19.261712   83607 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 22:57:19.261721   83607 kubeadm.go:309] 
	I0315 22:57:19.261791   83607 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 22:57:19.261804   83607 kubeadm.go:309] 
	I0315 22:57:19.261872   83607 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 22:57:19.261996   83607 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 22:57:19.262091   83607 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 22:57:19.262106   83607 kubeadm.go:309] 
	I0315 22:57:19.262187   83607 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 22:57:19.262301   83607 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 22:57:19.262318   83607 kubeadm.go:309] 
	I0315 22:57:19.262422   83607 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qikmp3.n4r8rw2ox0aq6wwt \
	I0315 22:57:19.262566   83607 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0315 22:57:19.262600   83607 kubeadm.go:309] 	--control-plane 
	I0315 22:57:19.262606   83607 kubeadm.go:309] 
	I0315 22:57:19.262708   83607 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 22:57:19.262717   83607 kubeadm.go:309] 
	I0315 22:57:19.262815   83607 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qikmp3.n4r8rw2ox0aq6wwt \
	I0315 22:57:19.262954   83607 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0315 22:57:19.262968   83607 cni.go:84] Creating CNI manager for ""
	I0315 22:57:19.262978   83607 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:57:19.264538   83607 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0315 22:57:19.265852   83607 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0315 22:57:19.294335   83607 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
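The 457-byte file scp'd above is the bridge CNI config minikube generates for the crio runtime. Its exact contents are not reproduced in this log, so the following is only an illustrative conflist of the kind that step writes; field values, in particular the pod subnet, are assumptions:

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF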
	I0315 22:57:19.376103   83607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 22:57:19.376191   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:19.376201   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-097314 minikube.k8s.io/updated_at=2024_03_15T22_57_19_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=addons-097314 minikube.k8s.io/primary=true
	I0315 22:57:19.549271   83607 ops.go:34] apiserver oom_adj: -16
	I0315 22:57:19.549427   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:20.049527   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:20.549464   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:21.049925   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:21.550386   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:22.050106   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:22.549967   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:23.050132   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:23.549826   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:24.050381   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:24.550084   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:25.050076   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:25.550283   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:26.050183   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:26.549587   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:27.050401   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:27.550312   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:28.049583   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:28.549907   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:29.049662   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:29.549923   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:30.049542   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:30.549565   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:31.050062   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:31.550402   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:32.049494   83607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 22:57:32.201740   83607 kubeadm.go:1107] duration metric: took 12.825618077s to wait for elevateKubeSystemPrivileges
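The block of identical `kubectl get sa default` calls above is a poll: after creating the minikube-rbac cluster role binding, minikube retries roughly every 500ms until the default ServiceAccount exists, which it uses as the readiness signal for elevateKubeSystemPrivileges. The same wait, written as a plain shell loop over the binary and kubeconfig paths that appear in the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
            --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
        sleep 0.5   # matches the ~500ms spacing between the retries above
    done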
	W0315 22:57:32.201780   83607 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 22:57:32.201789   83607 kubeadm.go:393] duration metric: took 23.4142571s to StartCluster
	I0315 22:57:32.201807   83607 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:32.201929   83607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:57:32.202274   83607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:57:32.202454   83607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 22:57:32.202482   83607 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.35 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 22:57:32.204449   83607 out.go:177] * Verifying Kubernetes components...
	I0315 22:57:32.202573   83607 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0315 22:57:32.202667   83607 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
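The toEnable map above lists the addons this profile will turn on (ingress, ingress-dns, metrics-server, registry, csi-hostpath-driver, volumesnapshots, storage-provisioner, yakd, and the others marked true). Outside the test harness the same toggles are exposed through the minikube CLI, for example:

    minikube -p addons-097314 addons enable metrics-server
    minikube -p addons-097314 addons enable ingress
    minikube -p addons-097314 addons list   # show which addons are enabled for the profile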
	I0315 22:57:32.206096   83607 addons.go:69] Setting inspektor-gadget=true in profile "addons-097314"
	I0315 22:57:32.206112   83607 addons.go:69] Setting gcp-auth=true in profile "addons-097314"
	I0315 22:57:32.206112   83607 addons.go:69] Setting yakd=true in profile "addons-097314"
	I0315 22:57:32.206135   83607 mustload.go:65] Loading cluster: addons-097314
	I0315 22:57:32.206145   83607 addons.go:234] Setting addon inspektor-gadget=true in "addons-097314"
	I0315 22:57:32.206151   83607 addons.go:69] Setting metrics-server=true in profile "addons-097314"
	I0315 22:57:32.206147   83607 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-097314"
	I0315 22:57:32.206171   83607 addons.go:234] Setting addon metrics-server=true in "addons-097314"
	I0315 22:57:32.206183   83607 addons.go:69] Setting ingress=true in profile "addons-097314"
	I0315 22:57:32.206194   83607 addons.go:69] Setting registry=true in profile "addons-097314"
	I0315 22:57:32.206195   83607 addons.go:69] Setting ingress-dns=true in profile "addons-097314"
	I0315 22:57:32.206201   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206215   83607 addons.go:234] Setting addon ingress=true in "addons-097314"
	I0315 22:57:32.206217   83607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 22:57:32.206230   83607 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-097314"
	I0315 22:57:32.206233   83607 addons.go:234] Setting addon registry=true in "addons-097314"
	I0315 22:57:32.206249   83607 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-097314"
	I0315 22:57:32.206263   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206275   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206179   83607 addons.go:69] Setting storage-provisioner=true in profile "addons-097314"
	I0315 22:57:32.206327   83607 addons.go:234] Setting addon storage-provisioner=true in "addons-097314"
	I0315 22:57:32.206334   83607 config.go:182] Loaded profile config "addons-097314": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 22:57:32.206356   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206402   83607 addons.go:69] Setting volumesnapshots=true in profile "addons-097314"
	I0315 22:57:32.206424   83607 addons.go:234] Setting addon volumesnapshots=true in "addons-097314"
	I0315 22:57:32.206442   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206525   83607 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-097314"
	I0315 22:57:32.206557   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206749   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206766   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206767   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206146   83607 addons.go:234] Setting addon yakd=true in "addons-097314"
	I0315 22:57:32.206794   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206801   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206807   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206219   83607 addons.go:234] Setting addon ingress-dns=true in "addons-097314"
	I0315 22:57:32.206813   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206821   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206841   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206156   83607 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-097314"
	I0315 22:57:32.206873   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206883   83607 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-097314"
	I0315 22:57:32.206891   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206799   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206908   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206911   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206105   83607 addons.go:69] Setting cloud-spanner=true in profile "addons-097314"
	I0315 22:57:32.206975   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.206979   83607 addons.go:234] Setting addon cloud-spanner=true in "addons-097314"
	I0315 22:57:32.206993   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.206188   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.206136   83607 addons.go:69] Setting default-storageclass=true in profile "addons-097314"
	I0315 22:57:32.206096   83607 addons.go:69] Setting helm-tiller=true in profile "addons-097314"
	I0315 22:57:32.207112   83607 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-097314"
	I0315 22:57:32.207129   83607 addons.go:234] Setting addon helm-tiller=true in "addons-097314"
	I0315 22:57:32.207346   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207355   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207478   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207373   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.207371   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.207391   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207517   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207394   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.207543   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207841   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207863   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207879   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207883   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207887   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.207907   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207410   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.208129   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.207416   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.208265   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.227775   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43805
	I0315 22:57:32.227980   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33085
	I0315 22:57:32.228082   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36663
	I0315 22:57:32.228547   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.228569   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.228579   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.229078   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.229100   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.229081   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.229155   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.229169   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.229185   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.229501   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.229561   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.229619   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.229641   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39003
	I0315 22:57:32.230126   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.230128   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.230157   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.230169   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.230329   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.230718   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.230741   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.230809   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.231074   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.236310   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43921
	I0315 22:57:32.236644   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.237146   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.237169   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.237516   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.237608   83607 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-097314"
	I0315 22:57:32.237653   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.238016   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.238025   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.238042   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.238056   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.238405   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.238441   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.269438   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44921
	I0315 22:57:32.269701   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39777
	I0315 22:57:32.270010   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.270212   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.270714   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.270731   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.271076   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.271293   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.271310   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.271520   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33573
	I0315 22:57:32.271749   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.271793   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.271906   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.272494   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.272496   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.272548   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.272692   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.272968   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.273135   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.274940   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.275359   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.275402   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.276172   83607 addons.go:234] Setting addon default-storageclass=true in "addons-097314"
	I0315 22:57:32.276219   83607 host.go:66] Checking if "addons-097314" exists ...
	I0315 22:57:32.276597   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.276625   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.277627   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44051
	I0315 22:57:32.277614   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32927
	I0315 22:57:32.277990   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.278063   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.278428   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.278451   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.278785   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.278874   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.278903   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.279355   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.279394   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.279595   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.281448   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34189
	I0315 22:57:32.282145   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.282774   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.282791   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.283426   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.283716   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.285887   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41911
	I0315 22:57:32.286388   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.286931   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.286950   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.287585   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.288222   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.288263   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.288878   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.288907   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.289370   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0315 22:57:32.291253   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40207
	I0315 22:57:32.291482   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.293685   83607 out.go:177]   - Using image docker.io/registry:2.8.3
	I0315 22:57:32.291963   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.292308   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.292792   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0315 22:57:32.293544   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43829
	I0315 22:57:32.294104   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0315 22:57:32.296184   83607 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0315 22:57:32.297472   83607 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0315 22:57:32.297494   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0315 22:57:32.297514   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
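Each "installing /etc/kubernetes/addons/..." line above and below pairs a rendered manifest with an scp of its bytes into /etc/kubernetes/addons on the node, over the SSH connection being resolved here; the copied manifests are then applied inside the VM against the node-local kubeconfig. The apply step itself is not shown in this excerpt, so the second command below is an assumption about the follow-up, using only paths that do appear in the log:

    # copy the rendered manifest to the node (user, IP and key taken from the "new ssh client" lines below)
    scp -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa \
        registry-rc.yaml docker@192.168.39.35:/etc/kubernetes/addons/registry-rc.yaml
    # assumed follow-up inside the VM: apply the manifest with the node-local kubectl
    sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply \
        -f /etc/kubernetes/addons/registry-rc.yaml --kubeconfig=/var/lib/minikube/kubeconfig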
	I0315 22:57:32.295474   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.295519   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.295628   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.297647   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.295715   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.296312   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.297711   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.296647   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33731
	I0315 22:57:32.297122   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44243
	I0315 22:57:32.298300   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.298317   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.298456   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.298469   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.298919   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.299015   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299076   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299114   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299156   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.299267   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.299346   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42013
	I0315 22:57:32.299753   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.299796   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.300096   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.300112   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.300128   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.300158   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.300241   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.300252   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.300364   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.300375   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.300571   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.300735   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.301256   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.301293   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.301476   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.301528   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.301662   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.301979   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.302034   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33987
	I0315 22:57:32.302322   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.302410   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.302739   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.302758   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.303123   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.303691   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.303740   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.303955   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.303998   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.304235   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.304253   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.306025   83607 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0315 22:57:32.304469   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.304676   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.304724   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.305835   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.306504   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.307533   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0315 22:57:32.307549   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0315 22:57:32.307569   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.307717   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.309308   83607 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0315 22:57:32.307977   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.308067   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.310773   83607 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0315 22:57:32.310866   83607 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0315 22:57:32.310888   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.311019   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.311615   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.312199   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0315 22:57:32.312271   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0315 22:57:32.312330   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.312393   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.312479   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.312610   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.313914   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0315 22:57:32.313943   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.315151   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0315 22:57:32.315166   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0315 22:57:32.315184   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.315231   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.316693   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0315 22:57:32.315446   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.315490   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41739
	I0315 22:57:32.316971   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.319296   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0315 22:57:32.317955   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.317650   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32875
	I0315 22:57:32.318229   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.318388   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.320471   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.322437   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0315 22:57:32.322495   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.321542   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.321590   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.321788   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.322028   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.321331   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.323931   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0315 22:57:32.324050   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.324117   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.324141   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.324322   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.324360   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.324512   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.325258   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45499
	I0315 22:57:32.325442   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45309
	I0315 22:57:32.325841   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0315 22:57:32.325901   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.325988   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.326283   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.326295   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.326320   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42129
	I0315 22:57:32.326341   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.326340   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.326380   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.327465   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0315 22:57:32.328752   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0315 22:57:32.328768   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0315 22:57:32.328783   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.327726   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.327867   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.327916   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.328072   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.328862   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.328168   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.328902   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.328164   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.328992   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.328203   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.329231   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.329277   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.329622   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.329863   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:32.329907   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:32.330340   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0315 22:57:32.331055   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.331408   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.331435   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.331581   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.331597   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.332165   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.332334   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.334312   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.334312   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.334326   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.334346   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.334346   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.334378   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.334315   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.334399   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.336351   83607 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0315 22:57:32.334866   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.335156   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.337688   83607 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 22:57:32.337769   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0315 22:57:32.337791   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.337739   83607 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 22:57:32.339302   83607 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 22:57:32.339329   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 22:57:32.339348   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.338532   83607 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0315 22:57:32.338704   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.340624   83607 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 22:57:32.340644   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0315 22:57:32.340661   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.340719   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.342023   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46799
	I0315 22:57:32.342754   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.342952   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.343467   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.343486   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.343682   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.343706   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.343999   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.344222   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.344408   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.344998   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.345251   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.345283   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.345328   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43881
	I0315 22:57:32.345440   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.345524   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.345532   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.345793   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.347450   83607 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0315 22:57:32.346064   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.348726   83607 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0315 22:57:32.348741   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0315 22:57:32.348759   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.346122   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.348809   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.346152   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.346265   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.346309   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.347362   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.347930   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.349121   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.350713   83607 out.go:177]   - Using image docker.io/busybox:stable
	I0315 22:57:32.349585   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.349609   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.350994   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41051
	I0315 22:57:32.353211   83607 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0315 22:57:32.353222   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.354443   83607 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 22:57:32.354462   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0315 22:57:32.354463   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.354479   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.354484   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.352214   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.352254   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.352163   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.353854   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.354043   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.354722   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.354806   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.355378   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.355412   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.355435   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.355887   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.355905   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.356094   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.356749   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.356932   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.357507   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.359162   83607 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0315 22:57:32.358099   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37933
	I0315 22:57:32.358134   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43961
	I0315 22:57:32.358674   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.359097   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.359386   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.360400   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0315 22:57:32.360408   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0315 22:57:32.360418   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.360558   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.360581   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.361199   83607 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 22:57:32.361209   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 22:57:32.361220   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.361271   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.361491   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.361594   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:32.361859   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.362106   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.362123   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.362182   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.362225   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:32.362238   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:32.362724   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.362890   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.363990   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:32.364310   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:32.364859   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.366798   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0315 22:57:32.365701   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.366796   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.365941   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:32.366205   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.366934   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.366680   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.366838   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.368310   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.366959   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.367209   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.367230   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.369430   83607 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0315 22:57:32.368283   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 22:57:32.368489   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.368508   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.370793   83607 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0315 22:57:32.370806   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0315 22:57:32.370817   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.371459   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.372168   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 22:57:32.371657   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.373571   83607 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 22:57:32.373591   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0315 22:57:32.373608   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:32.374099   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.374826   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.374849   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.375009   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.375220   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.375448   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.375673   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:32.376993   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.377415   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:32.377438   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:32.377576   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:32.377717   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:32.377849   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:32.377966   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	W0315 22:57:32.383985   83607 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:40180->192.168.39.35:22: read: connection reset by peer
	I0315 22:57:32.384015   83607 retry.go:31] will retry after 285.366853ms: ssh: handshake failed: read tcp 192.168.39.1:40180->192.168.39.35:22: read: connection reset by peer
	I0315 22:57:32.726493   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0315 22:57:32.726519   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0315 22:57:32.842629   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 22:57:32.912616   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0315 22:57:32.913030   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0315 22:57:32.913056   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0315 22:57:32.914092   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0315 22:57:32.914109   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0315 22:57:32.922583   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0315 22:57:32.976537   83607 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0315 22:57:32.976564   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0315 22:57:32.977987   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0315 22:57:32.982332   83607 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0315 22:57:32.982353   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0315 22:57:32.992976   83607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 22:57:32.993916   83607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
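	(The bash pipeline above rewrites the coredns ConfigMap in place: it injects a hosts block mapping host.minikube.internal to 192.168.39.1 and inserts a log directive before the errors stanza. A hedged way to confirm the result by hand, not part of this run, would be:)
	    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
	    #        hosts {
	    #           192.168.39.1 host.minikube.internal
	    #           fallthrough
	    #        }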
	I0315 22:57:32.994486   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0315 22:57:32.994512   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0315 22:57:32.996991   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0315 22:57:32.997010   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0315 22:57:32.997892   83607 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0315 22:57:32.997911   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0315 22:57:33.007758   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0315 22:57:33.051670   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 22:57:33.060836   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0315 22:57:33.060859   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0315 22:57:33.063103   83607 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0315 22:57:33.063123   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0315 22:57:33.092567   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0315 22:57:33.103535   83607 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0315 22:57:33.103558   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0315 22:57:33.145229   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0315 22:57:33.145258   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0315 22:57:33.168360   83607 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 22:57:33.168386   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0315 22:57:33.192767   83607 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0315 22:57:33.192805   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0315 22:57:33.262222   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0315 22:57:33.262245   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0315 22:57:33.275581   83607 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0315 22:57:33.275607   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0315 22:57:33.294246   83607 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0315 22:57:33.294278   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0315 22:57:33.318841   83607 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0315 22:57:33.318865   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0315 22:57:33.476720   83607 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0315 22:57:33.476756   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0315 22:57:33.486999   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0315 22:57:33.487025   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0315 22:57:33.489628   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0315 22:57:33.543998   83607 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0315 22:57:33.544029   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0315 22:57:33.563190   83607 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0315 22:57:33.563213   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0315 22:57:33.563812   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0315 22:57:33.642818   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0315 22:57:33.750675   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0315 22:57:33.750714   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0315 22:57:33.760839   83607 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0315 22:57:33.760869   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0315 22:57:33.799639   83607 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0315 22:57:33.799665   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0315 22:57:33.836039   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0315 22:57:34.124867   83607 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 22:57:34.124891   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0315 22:57:34.157962   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0315 22:57:34.157999   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0315 22:57:34.168968   83607 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0315 22:57:34.168996   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0315 22:57:34.344508   83607 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 22:57:34.344533   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0315 22:57:34.354120   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0315 22:57:34.354146   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0315 22:57:34.463671   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0315 22:57:34.584035   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0315 22:57:34.584065   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0315 22:57:34.609191   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0315 22:57:34.824700   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0315 22:57:34.824724   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0315 22:57:34.904906   83607 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 22:57:34.904938   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0315 22:57:35.037346   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0315 22:57:39.012868   83607 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0315 22:57:39.012910   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:39.016479   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.016980   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:39.017006   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.017225   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:39.017436   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:39.017658   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:39.017836   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:39.240199   83607 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0315 22:57:39.426529   83607 addons.go:234] Setting addon gcp-auth=true in "addons-097314"
	I0315 22:57:39.426587   83607 host.go:66] Checking if "addons-097314" exists ...
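	(Application-default credentials and the project name were just copied into the VM, so the gcp-auth addon is switched on for this profile. The equivalent manual step, shown only as a hedged illustration and not taken from this log, would be:)
	    minikube -p addons-097314 addons enable gcp-auth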
	I0315 22:57:39.426885   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:39.426912   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:39.443616   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40849
	I0315 22:57:39.444187   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:39.444736   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:39.444770   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:39.445159   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:39.445687   83607 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 22:57:39.445729   83607 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 22:57:39.461509   83607 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43575
	I0315 22:57:39.461999   83607 main.go:141] libmachine: () Calling .GetVersion
	I0315 22:57:39.462492   83607 main.go:141] libmachine: Using API Version  1
	I0315 22:57:39.462513   83607 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 22:57:39.462905   83607 main.go:141] libmachine: () Calling .GetMachineName
	I0315 22:57:39.463153   83607 main.go:141] libmachine: (addons-097314) Calling .GetState
	I0315 22:57:39.464855   83607 main.go:141] libmachine: (addons-097314) Calling .DriverName
	I0315 22:57:39.465150   83607 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0315 22:57:39.465180   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHHostname
	I0315 22:57:39.468286   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.468689   83607 main.go:141] libmachine: (addons-097314) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:63:6b:cb", ip: ""} in network mk-addons-097314: {Iface:virbr1 ExpiryTime:2024-03-15 23:56:51 +0000 UTC Type:0 Mac:52:54:00:63:6b:cb Iaid: IPaddr:192.168.39.35 Prefix:24 Hostname:addons-097314 Clientid:01:52:54:00:63:6b:cb}
	I0315 22:57:39.468714   83607 main.go:141] libmachine: (addons-097314) DBG | domain addons-097314 has defined IP address 192.168.39.35 and MAC address 52:54:00:63:6b:cb in network mk-addons-097314
	I0315 22:57:39.468860   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHPort
	I0315 22:57:39.469037   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHKeyPath
	I0315 22:57:39.469205   83607 main.go:141] libmachine: (addons-097314) Calling .GetSSHUsername
	I0315 22:57:39.469368   83607 sshutil.go:53] new ssh client: &{IP:192.168.39.35 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/addons-097314/id_rsa Username:docker}
	I0315 22:57:39.604895   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.76221535s)
	I0315 22:57:39.604949   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.69230024s)
	I0315 22:57:39.604956   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.604969   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.605026   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.605037   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.605092   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (6.682479131s)
	I0315 22:57:39.605129   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (6.627109833s)
	I0315 22:57:39.605145   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.605150   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.605158   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.605156   83607 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (6.612158607s)
	I0315 22:57:39.605183   83607 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.611244009s)
	I0315 22:57:39.605200   83607 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0315 22:57:39.605158   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606182   83607 node_ready.go:35] waiting up to 6m0s for node "addons-097314" to be "Ready" ...
	I0315 22:57:39.606405   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606415   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606428   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606429   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606449   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606454   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606458   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606462   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606471   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606477   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606494   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606524   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606532   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606540   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606545   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606547   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606580   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606588   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606602   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:39.606609   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:39.606770   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606808   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606814   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.606904   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.606933   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.606940   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.607113   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.607144   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.607151   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.607236   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:39.607266   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:39.607273   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:39.683106   83607 node_ready.go:49] node "addons-097314" has status "Ready":"True"
	I0315 22:57:39.683143   83607 node_ready.go:38] duration metric: took 76.937145ms for node "addons-097314" to be "Ready" ...
	I0315 22:57:39.683155   83607 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 22:57:39.764002   83607 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace to be "Ready" ...
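	(The readiness checks above poll the API server directly from Go. Expressed as kubectl commands, a hedged equivalent of the same waits, not what the test itself runs, looks like:)
	    kubectl wait --for=condition=Ready node/addons-097314 --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m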
	I0315 22:57:40.235729   83607 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-097314" context rescaled to 1 replicas
	I0315 22:57:40.274084   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (7.22237558s)
	I0315 22:57:40.274124   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.266343388s)
	I0315 22:57:40.274146   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274161   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274146   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274224   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274485   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274500   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274522   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.274522   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274532   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274541   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274577   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274598   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.274627   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.274639   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.274827   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274867   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.274877   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274891   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.274895   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.274897   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:40.317503   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.317526   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.317826   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:40.317829   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.317852   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	W0315 22:57:40.317950   83607 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
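	(The warning above is an optimistic-concurrency conflict: making one StorageClass the default means flipping the standard is-default-class annotation on the others, and the local-path object changed between read and update, so the update was rejected with a suggestion to retry against the latest version. A hedged illustration of the annotation involved, not the addon's own code:)
	    kubectl get storageclass
	    kubectl patch storageclass local-path -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'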
	I0315 22:57:40.344548   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:40.344570   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:40.344851   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:40.344877   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.425224   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.332615062s)
	I0315 22:57:41.425293   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425306   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425307   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.935634226s)
	I0315 22:57:41.425354   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (7.861499081s)
	I0315 22:57:41.425381   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425395   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.782539953s)
	I0315 22:57:41.425405   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425412   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425420   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425359   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425497   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425553   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (6.961847461s)
	W0315 22:57:41.425598   83607 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0315 22:57:41.425628   83607 retry.go:31] will retry after 277.136751ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
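	(The failure quoted above is an ordering race: the VolumeSnapshotClass in csi-hostpath-snapshotclass.yaml is applied in the same pass that creates the snapshot.storage.k8s.io CRDs, before the API server has registered the new kind; the runner retries shortly after, later using kubectl apply --force, once the CRDs exist. A hedged two-step alternative that avoids the race by waiting for the CRD to become Established:)
	    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=Established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml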
	I0315 22:57:41.425701   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (6.816473394s)
	I0315 22:57:41.425456   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.589361607s)
	I0315 22:57:41.425725   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425743   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.425757   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.425809   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.426176   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.426218   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.426226   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.426235   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.426243   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.426318   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.426327   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.426334   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.426342   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.426400   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.426423   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.426430   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.426437   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.426444   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.427714   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427770   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427798   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.427807   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.427821   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.427823   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427861   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427897   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427922   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.427934   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427971   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.428008   83607 addons.go:470] Verifying addon ingress=true in "addons-097314"
	I0315 22:57:41.428039   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.428065   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.428096   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.428108   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.430685   83607 out.go:177] * Verifying ingress addon...
	I0315 22:57:41.427977   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.427940   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.427958   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.427900   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.428345   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.429106   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.431997   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.432008   83607 addons.go:470] Verifying addon registry=true in "addons-097314"
	I0315 22:57:41.432027   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.432056   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:41.432061   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.432069   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:41.433443   83607 out.go:177] * Verifying registry addon...
	I0315 22:57:41.432061   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.432318   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.432329   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.432339   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:41.432346   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:41.432868   83607 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0315 22:57:41.435026   83607 addons.go:470] Verifying addon metrics-server=true in "addons-097314"
	I0315 22:57:41.435051   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.435085   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:41.436419   83607 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-097314 service yakd-dashboard -n yakd-dashboard
	
	I0315 22:57:41.435811   83607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0315 22:57:41.455662   83607 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0315 22:57:41.455685   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:41.455910   83607 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0315 22:57:41.455932   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
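The kapi.go loop above polls the ingress-nginx pods by label selector until they report Ready. Outside the test harness, roughly the same check can be reproduced with kubectl; this is a sketch, with the namespace and selector taken from the log line above and the 90s timeout chosen only for illustration:

	kubectl --context addons-097314 -n ingress-nginx wait pod \
	  --selector=app.kubernetes.io/name=ingress-nginx \
	  --for=condition=Ready --timeout=90s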
	I0315 22:57:41.703714   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
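The apply above installs the external snapshotter CRDs and the volume-snapshot-controller deployment. A check along these lines confirms the CRDs registered (illustrative; the CRD names correspond to the manifest file names in the command):

	kubectl --context addons-097314 get crd \
	  volumesnapshotclasses.snapshot.storage.k8s.io \
	  volumesnapshotcontents.snapshot.storage.k8s.io \
	  volumesnapshots.snapshot.storage.k8s.io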
	I0315 22:57:41.771797   83607 pod_ready.go:102] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:41.940160   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:41.943061   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:42.464685   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:42.465131   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:42.965969   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:42.966038   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:43.478644   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:43.500872   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:43.695921   83607 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (4.230739627s)
	I0315 22:57:43.697584   83607 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0315 22:57:43.695892   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.658458721s)
	I0315 22:57:43.697652   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:43.697675   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:43.699190   83607 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0315 22:57:43.698033   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:43.698055   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:43.700508   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:43.700521   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:43.700532   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:43.700590   83607 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0315 22:57:43.700614   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0315 22:57:43.700827   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:43.700843   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:43.700853   83607 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-097314"
	I0315 22:57:43.702295   83607 out.go:177] * Verifying csi-hostpath-driver addon...
	I0315 22:57:43.700988   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:43.704428   83607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0315 22:57:43.722081   83607 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0315 22:57:43.722099   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
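The same polling pattern is applied here to the CSI hostpath driver pods. A manual equivalent of this status check (sketch; namespace and label come from the log line above):

	kubectl --context addons-097314 -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver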
	I0315 22:57:43.791575   83607 pod_ready.go:102] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:43.850975   83607 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0315 22:57:43.851001   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0315 22:57:43.941220   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:43.950479   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:43.981996   83607 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0315 22:57:43.982019   83607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0315 22:57:44.056877   83607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
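The gcp-auth addon is installed from the three manifests copied over SSH just above (namespace, service, and webhook). To watch the webhook pod come up after this apply, something like the following can be used (illustrative; the gcp-auth namespace matches the later verification step in this log):

	kubectl --context addons-097314 -n gcp-auth get pods -w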
	I0315 22:57:44.214183   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:44.439731   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:44.443567   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:44.733133   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:44.885974   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.182184508s)
	I0315 22:57:44.886047   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:44.886070   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:44.886401   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:44.886447   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:44.886461   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:44.886470   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:44.886733   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:44.886733   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:44.886787   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:44.940004   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:44.948897   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:45.211742   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:45.442752   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:45.444635   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:45.715843   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:45.823043   83607 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.76611992s)
	I0315 22:57:45.823108   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:45.823122   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:45.823498   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:45.823523   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:45.823568   83607 main.go:141] libmachine: Making call to close driver server
	I0315 22:57:45.823583   83607 main.go:141] libmachine: (addons-097314) Calling .Close
	I0315 22:57:45.823588   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:45.823830   83607 main.go:141] libmachine: (addons-097314) DBG | Closing plugin on server side
	I0315 22:57:45.823862   83607 main.go:141] libmachine: Successfully made call to close driver server
	I0315 22:57:45.823875   83607 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 22:57:45.825805   83607 addons.go:470] Verifying addon gcp-auth=true in "addons-097314"
	I0315 22:57:45.827474   83607 out.go:177] * Verifying gcp-auth addon...
	I0315 22:57:45.829753   83607 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0315 22:57:45.834716   83607 pod_ready.go:102] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:45.841885   83607 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0315 22:57:45.841911   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:45.942313   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:45.948221   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:46.210568   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:46.271712   83607 pod_ready.go:97] pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.35 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-15 22:57:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-15 22:57:34 +0000 UTC,FinishedAt:2024-03-15 22:57:45 +0000 UTC,ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e Started:0xc002d62fd0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0315 22:57:46.271744   83607 pod_ready.go:81] duration metric: took 6.507708877s for pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace to be "Ready" ...
	E0315 22:57:46.271757   83607 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-8nn6p" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-15 22:57:32 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.35 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-15 22:57:32 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-15 22:57:34 +0000 UTC,FinishedAt:2024-03-15 22:57:45 +0000 UTC,ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://96475f0d54d5592e5036907e1cf95175d403077c5f16d2c36570e2b148b6914e Started:0xc002d62fd0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0315 22:57:46.271766   83607 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace to be "Ready" ...
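The status dump above explains the skip: the first CoreDNS replica has phase Succeeded (its container terminated with exit code 0), so pod_ready.go abandons it and waits on the surviving replica instead. The phase can be read directly with a jsonpath query of this shape (sketch; the pod name is the one from this run and will differ elsewhere):

	kubectl --context addons-097314 -n kube-system get pod coredns-5dd5756b68-8nn6p \
	  -o jsonpath='{.status.phase}{"\n"}'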
	I0315 22:57:46.335077   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:46.440637   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:46.443706   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:46.710234   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:46.834473   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:46.940072   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:46.942369   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:47.210840   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:47.333772   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:47.440251   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:47.443630   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:47.710180   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:47.833823   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:47.940283   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:47.942433   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:48.210678   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:48.278842   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:48.334933   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:48.440325   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:48.447654   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:48.710271   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:48.833985   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:48.950160   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:48.961050   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:49.435249   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:49.443544   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:49.447797   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:49.448454   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:49.709820   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:49.834013   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:49.940253   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:49.942675   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:50.210575   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:50.334659   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:50.440595   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:50.443562   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:50.709970   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:50.778614   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:50.834033   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:50.940901   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:50.942884   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:51.211031   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:51.333189   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:51.439659   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:51.442516   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:51.709672   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:51.833573   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:51.940334   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:51.941906   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:52.210988   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:52.332970   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:52.440292   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:52.447241   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:52.712891   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:52.778804   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:52.834041   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:52.940420   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:52.942743   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:53.209973   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:53.334147   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:53.440432   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:53.443242   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:53.710399   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:53.834424   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:53.939800   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:53.943022   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:54.211055   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:54.333970   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:54.439995   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:54.443382   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:54.710820   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:54.778832   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:54.834420   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:54.940293   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:54.946021   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:55.210366   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:55.333858   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:55.440638   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:55.448312   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:55.710378   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:55.834345   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:55.939750   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:55.942756   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:56.210876   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:56.335867   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:56.441109   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:56.443129   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:56.709774   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:56.834799   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:56.941065   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:56.942421   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:57.212188   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:57.279196   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:57.349715   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:57.441881   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:57.443747   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:57.711519   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:57.834827   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:57.940953   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:57.942928   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:58.210118   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:58.345401   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:58.439308   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:58.442174   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:58.711886   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:58.834764   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:58.941653   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:58.947477   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:59.211479   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:59.280754   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:57:59.334694   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:59.439999   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:59.443010   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:57:59.711158   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:57:59.833360   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:57:59.939532   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:57:59.943659   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:00.225340   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:00.333886   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:00.440853   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:00.445285   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:00.710882   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:00.851055   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:00.945036   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:00.952842   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:01.210726   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:01.333553   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:01.439598   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:01.454095   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:01.711171   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:01.784565   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:01.834368   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:01.943400   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:01.945945   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:02.210391   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:02.336471   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:02.439403   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:02.442526   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:02.710903   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:02.834603   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:02.939952   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:02.943218   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:03.211209   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:03.334048   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:03.441033   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:03.449532   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:03.711529   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:03.834179   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:03.942732   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:03.942866   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:04.210552   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:04.279302   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:04.333806   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:04.439876   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:04.442937   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:04.710088   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:04.833404   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:04.939474   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:04.942266   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:05.210691   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:05.333655   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:05.451009   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:05.453933   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:05.710707   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:05.833453   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:05.939731   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:05.942852   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:06.210740   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:06.455615   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:06.455968   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:06.457656   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:06.465016   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:06.710235   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:06.833409   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:06.939775   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:06.942694   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:07.211469   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:07.334043   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:07.440566   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:07.442166   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:07.710656   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:07.834633   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:07.940618   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:07.948558   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:08.210176   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:08.334231   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:08.439773   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:08.442689   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:08.711489   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:08.779008   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:08.835175   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:08.941304   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:08.943495   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:09.209760   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:09.333401   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:09.440012   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:09.443264   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:09.710473   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:09.834018   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:09.939944   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:09.942490   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:10.232335   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:10.505039   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:10.506766   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:10.507343   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:10.710506   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:10.780066   83607 pod_ready.go:102] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"False"
	I0315 22:58:10.833533   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:10.940543   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:10.941902   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:11.210702   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:11.334222   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:11.440752   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:11.443029   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:11.710085   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:11.834136   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:11.941472   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:11.942965   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:12.210533   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:12.334071   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:12.452469   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:12.452956   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:12.710983   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:12.777592   83607 pod_ready.go:92] pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.777620   83607 pod_ready.go:81] duration metric: took 26.505842389s for pod "coredns-5dd5756b68-p6s6d" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.777632   83607 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.782954   83607 pod_ready.go:92] pod "etcd-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.782980   83607 pod_ready.go:81] duration metric: took 5.340052ms for pod "etcd-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.782992   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.790890   83607 pod_ready.go:92] pod "kube-apiserver-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.790911   83607 pod_ready.go:81] duration metric: took 7.911535ms for pod "kube-apiserver-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.790922   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.795880   83607 pod_ready.go:92] pod "kube-controller-manager-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.795905   83607 pod_ready.go:81] duration metric: took 4.976053ms for pod "kube-controller-manager-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.795920   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zspm2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.801086   83607 pod_ready.go:92] pod "kube-proxy-zspm2" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:12.801109   83607 pod_ready.go:81] duration metric: took 5.181246ms for pod "kube-proxy-zspm2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.801120   83607 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:12.833350   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:12.941755   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:12.944090   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:13.175879   83607 pod_ready.go:92] pod "kube-scheduler-addons-097314" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:13.175907   83607 pod_ready.go:81] duration metric: took 374.779473ms for pod "kube-scheduler-addons-097314" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.175918   83607 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-69cf46c98-spvr4" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.209804   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:13.333992   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:13.440801   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:13.442768   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:13.575501   83607 pod_ready.go:92] pod "metrics-server-69cf46c98-spvr4" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:13.575533   83607 pod_ready.go:81] duration metric: took 399.607151ms for pod "metrics-server-69cf46c98-spvr4" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.575546   83607 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-gpjp2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.713035   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:13.836196   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:13.942723   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:13.942850   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:13.976345   83607 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-gpjp2" in "kube-system" namespace has status "Ready":"True"
	I0315 22:58:13.976373   83607 pod_ready.go:81] duration metric: took 400.819164ms for pod "nvidia-device-plugin-daemonset-gpjp2" in "kube-system" namespace to be "Ready" ...
	I0315 22:58:13.976392   83607 pod_ready.go:38] duration metric: took 34.293217536s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
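At this point pod_ready.go has confirmed the Ready condition for CoreDNS, etcd, the API server, the controller manager, kube-proxy, the scheduler, metrics-server and the NVIDIA device plugin. A compressed manual equivalent for one of those labels (sketch; the 6m timeout mirrors the per-pod budget in the log):

	kubectl --context addons-097314 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=6m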
	I0315 22:58:13.976411   83607 api_server.go:52] waiting for apiserver process to appear ...
	I0315 22:58:13.976491   83607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 22:58:14.005538   83607 api_server.go:72] duration metric: took 41.803012887s to wait for apiserver process to appear ...
	I0315 22:58:14.005581   83607 api_server.go:88] waiting for apiserver healthz status ...
	I0315 22:58:14.005613   83607 api_server.go:253] Checking apiserver healthz at https://192.168.39.35:8443/healthz ...
	I0315 22:58:14.010094   83607 api_server.go:279] https://192.168.39.35:8443/healthz returned 200:
	ok
	I0315 22:58:14.011270   83607 api_server.go:141] control plane version: v1.28.4
	I0315 22:58:14.011293   83607 api_server.go:131] duration metric: took 5.70367ms to wait for apiserver health ...
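The health probe above hits the API server's /healthz endpoint directly over HTTPS. The same check can be made through kubectl's raw API access, which reuses the credentials from the kubeconfig (illustrative):

	kubectl --context addons-097314 get --raw='/healthz'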
	I0315 22:58:14.011303   83607 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 22:58:14.181944   83607 system_pods.go:59] 18 kube-system pods found
	I0315 22:58:14.181982   83607 system_pods.go:61] "coredns-5dd5756b68-p6s6d" [7caaa4dc-1836-4020-b722-90edda2d212b] Running
	I0315 22:58:14.181990   83607 system_pods.go:61] "csi-hostpath-attacher-0" [ba76f6d6-961f-4d78-96ee-b5169360170f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 22:58:14.181996   83607 system_pods.go:61] "csi-hostpath-resizer-0" [a08ba8ff-3283-4356-a495-7ebfd59456b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 22:58:14.182004   83607 system_pods.go:61] "csi-hostpathplugin-5g6gq" [e6164251-098a-4dfd-9978-fdc4963327c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 22:58:14.182010   83607 system_pods.go:61] "etcd-addons-097314" [20de319c-896a-478d-943a-7a0c85f67b63] Running
	I0315 22:58:14.182014   83607 system_pods.go:61] "kube-apiserver-addons-097314" [6a75a924-4c87-47a7-9100-c16d67666cd1] Running
	I0315 22:58:14.182017   83607 system_pods.go:61] "kube-controller-manager-addons-097314" [a38fcc0b-c3ff-4356-9fc1-ea4518126611] Running
	I0315 22:58:14.182022   83607 system_pods.go:61] "kube-ingress-dns-minikube" [beb46bcd-db3c-4022-9b99-e6a29dbf5543] Running
	I0315 22:58:14.182028   83607 system_pods.go:61] "kube-proxy-zspm2" [11f770a3-08d0-4140-a786-578f0feee2bd] Running
	I0315 22:58:14.182032   83607 system_pods.go:61] "kube-scheduler-addons-097314" [bd1f348f-d044-462f-8e0d-2351e49ef9fb] Running
	I0315 22:58:14.182037   83607 system_pods.go:61] "metrics-server-69cf46c98-spvr4" [673c996b-9f13-4f55-a0da-458b3f9d201d] Running
	I0315 22:58:14.182042   83607 system_pods.go:61] "nvidia-device-plugin-daemonset-gpjp2" [2e033f82-a2e7-42b2-9052-980b0046daa3] Running
	I0315 22:58:14.182046   83607 system_pods.go:61] "registry-7bpx6" [f08323c1-5f57-4428-ab07-fa1dd1960c2c] Running
	I0315 22:58:14.182056   83607 system_pods.go:61] "registry-proxy-bp44p" [03d529e0-4bcd-4fa9-a95b-2921fe26e9cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 22:58:14.182067   83607 system_pods.go:61] "snapshot-controller-58dbcc7b99-gm4wh" [bee3baf8-f3b4-4a5d-8724-f2c5356c9d59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.182084   83607 system_pods.go:61] "snapshot-controller-58dbcc7b99-wvz4s" [0d796667-8f3f-4044-bb24-25cd1713ebc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.182088   83607 system_pods.go:61] "storage-provisioner" [9c400f5a-33f3-460d-a136-9d1ff87f0009] Running
	I0315 22:58:14.182092   83607 system_pods.go:61] "tiller-deploy-7b677967b9-5s4t7" [159edcb2-34c6-484f-b9c1-7b4d9f4cc492] Running
	I0315 22:58:14.182101   83607 system_pods.go:74] duration metric: took 170.791567ms to wait for pod list to return data ...
	I0315 22:58:14.182112   83607 default_sa.go:34] waiting for default service account to be created ...
	I0315 22:58:14.210391   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:14.334276   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:14.375271   83607 default_sa.go:45] found service account: "default"
	I0315 22:58:14.375297   83607 default_sa.go:55] duration metric: took 193.176835ms for default service account to be created ...
	I0315 22:58:14.375306   83607 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 22:58:14.439344   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:14.442508   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:14.777594   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:14.783410   83607 system_pods.go:86] 18 kube-system pods found
	I0315 22:58:14.783434   83607 system_pods.go:89] "coredns-5dd5756b68-p6s6d" [7caaa4dc-1836-4020-b722-90edda2d212b] Running
	I0315 22:58:14.783442   83607 system_pods.go:89] "csi-hostpath-attacher-0" [ba76f6d6-961f-4d78-96ee-b5169360170f] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0315 22:58:14.783448   83607 system_pods.go:89] "csi-hostpath-resizer-0" [a08ba8ff-3283-4356-a495-7ebfd59456b6] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0315 22:58:14.783456   83607 system_pods.go:89] "csi-hostpathplugin-5g6gq" [e6164251-098a-4dfd-9978-fdc4963327c3] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0315 22:58:14.783461   83607 system_pods.go:89] "etcd-addons-097314" [20de319c-896a-478d-943a-7a0c85f67b63] Running
	I0315 22:58:14.783466   83607 system_pods.go:89] "kube-apiserver-addons-097314" [6a75a924-4c87-47a7-9100-c16d67666cd1] Running
	I0315 22:58:14.783470   83607 system_pods.go:89] "kube-controller-manager-addons-097314" [a38fcc0b-c3ff-4356-9fc1-ea4518126611] Running
	I0315 22:58:14.783473   83607 system_pods.go:89] "kube-ingress-dns-minikube" [beb46bcd-db3c-4022-9b99-e6a29dbf5543] Running
	I0315 22:58:14.783477   83607 system_pods.go:89] "kube-proxy-zspm2" [11f770a3-08d0-4140-a786-578f0feee2bd] Running
	I0315 22:58:14.783481   83607 system_pods.go:89] "kube-scheduler-addons-097314" [bd1f348f-d044-462f-8e0d-2351e49ef9fb] Running
	I0315 22:58:14.783485   83607 system_pods.go:89] "metrics-server-69cf46c98-spvr4" [673c996b-9f13-4f55-a0da-458b3f9d201d] Running
	I0315 22:58:14.783489   83607 system_pods.go:89] "nvidia-device-plugin-daemonset-gpjp2" [2e033f82-a2e7-42b2-9052-980b0046daa3] Running
	I0315 22:58:14.783492   83607 system_pods.go:89] "registry-7bpx6" [f08323c1-5f57-4428-ab07-fa1dd1960c2c] Running
	I0315 22:58:14.783497   83607 system_pods.go:89] "registry-proxy-bp44p" [03d529e0-4bcd-4fa9-a95b-2921fe26e9cc] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0315 22:58:14.783504   83607 system_pods.go:89] "snapshot-controller-58dbcc7b99-gm4wh" [bee3baf8-f3b4-4a5d-8724-f2c5356c9d59] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.783510   83607 system_pods.go:89] "snapshot-controller-58dbcc7b99-wvz4s" [0d796667-8f3f-4044-bb24-25cd1713ebc2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0315 22:58:14.783514   83607 system_pods.go:89] "storage-provisioner" [9c400f5a-33f3-460d-a136-9d1ff87f0009] Running
	I0315 22:58:14.783518   83607 system_pods.go:89] "tiller-deploy-7b677967b9-5s4t7" [159edcb2-34c6-484f-b9c1-7b4d9f4cc492] Running
	I0315 22:58:14.783525   83607 system_pods.go:126] duration metric: took 408.213527ms to wait for k8s-apps to be running ...
	I0315 22:58:14.783533   83607 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 22:58:14.783576   83607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 22:58:14.820486   83607 system_svc.go:56] duration metric: took 36.936432ms WaitForService to wait for kubelet
	I0315 22:58:14.820524   83607 kubeadm.go:576] duration metric: took 42.618005202s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 22:58:14.820554   83607 node_conditions.go:102] verifying NodePressure condition ...
	I0315 22:58:14.823730   83607 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 22:58:14.823757   83607 node_conditions.go:123] node cpu capacity is 2
	I0315 22:58:14.823770   83607 node_conditions.go:105] duration metric: took 3.209969ms to run NodePressure ...
	I0315 22:58:14.823780   83607 start.go:240] waiting for startup goroutines ...
	I0315 22:58:14.833729   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:14.939495   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:14.942498   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:15.210593   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:15.335294   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:15.441069   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:15.445348   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:15.710855   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:15.834496   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:15.940347   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:15.943835   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:16.210944   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:16.334244   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:16.439611   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:16.442700   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0315 22:58:16.710226   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:16.834432   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:16.939688   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:16.942372   83607 kapi.go:107] duration metric: took 35.506553508s to wait for kubernetes.io/minikube-addons=registry ...
	I0315 22:58:17.211777   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:17.334461   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:17.439768   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:17.711466   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:17.834089   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:17.940041   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:18.211347   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:18.334079   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:18.440635   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:18.711596   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:18.835047   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:18.940326   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:19.211203   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:19.334547   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:19.440196   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:19.710493   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:19.834307   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:19.941304   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:20.211025   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:20.334455   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:20.439513   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:20.745109   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:20.833547   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:20.940205   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:21.210662   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:21.336179   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:21.441506   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:21.710258   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:21.834669   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:21.940281   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:22.211004   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:22.336221   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:22.440703   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:22.710246   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:23.179889   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:23.180081   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:23.217461   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:23.334446   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:23.441341   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:23.709951   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:23.834880   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:23.941062   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:24.211030   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:24.334678   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:24.439502   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:24.711649   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:24.834927   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:24.939987   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:25.211261   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:25.334233   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:25.441798   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:25.710319   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:25.832945   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:25.940918   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:26.210563   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:26.334280   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:26.440043   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:26.710598   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:26.834048   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:26.941210   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:27.213520   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:27.337485   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:27.440770   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:27.711089   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:27.832811   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:27.940670   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:28.641447   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:28.642276   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:28.642465   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:28.712660   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:28.833345   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:28.940085   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:29.210768   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:29.333670   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:29.439784   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:29.710788   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:29.834192   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:29.940064   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:30.211731   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:30.334462   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:30.444604   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:30.711206   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:30.834411   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:30.942423   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:31.210363   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:31.334403   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:31.440499   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:31.713401   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:31.836146   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:31.940426   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:32.213509   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:32.341040   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:32.451987   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:32.710234   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:32.835528   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:32.940649   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:33.211893   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:33.333722   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:33.440357   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:33.713352   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:33.837831   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:33.939972   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:34.211653   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:34.333832   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:34.440191   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:34.710349   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:34.834052   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:35.356354   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:35.362516   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:35.367781   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:35.445163   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:35.716034   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:35.835001   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:35.940757   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:36.212798   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:36.334388   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:36.439736   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:36.710946   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:36.834659   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:36.944957   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:37.210153   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:37.333673   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:37.440446   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:37.711051   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:37.833946   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:37.940377   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:38.212418   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:38.334817   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:38.442713   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:38.711264   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:38.835141   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:38.943985   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:39.211276   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:39.334565   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:39.440233   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:39.716250   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:39.834029   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:39.940077   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:40.215520   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:40.342226   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:40.440739   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:40.716758   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:40.833980   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:40.941983   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:41.211214   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:41.336784   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:41.440592   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:41.710815   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:41.833809   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:41.945865   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:42.212258   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:42.336092   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:42.440013   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:42.711188   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:42.833776   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:42.940217   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:43.212329   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:43.833808   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:43.842294   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:43.849549   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:43.863811   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:43.940665   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:44.210598   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:44.334066   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:44.440226   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:44.709727   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:44.833911   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:44.941529   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:45.212096   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:45.334072   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:45.439636   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:45.711526   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:45.833997   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:45.941054   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:46.211035   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:46.334394   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:46.439882   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:46.721125   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:46.833922   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:46.940509   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:47.210829   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:47.334527   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:47.440801   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:47.710529   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:47.832966   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:47.940904   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:48.210496   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:48.333346   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:48.439560   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:48.711305   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:48.834963   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:48.940050   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:49.210703   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:49.336040   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:49.444316   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:50.063252   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:50.063316   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:50.069865   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:50.214339   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:50.334052   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:50.441040   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:50.709683   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:50.833692   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:50.939920   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:51.210100   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:51.333887   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:51.440435   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:51.710930   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:51.834578   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:51.939634   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:52.210952   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:52.335621   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:52.440153   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:52.710733   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:52.834137   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:53.112211   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:53.210149   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:53.334629   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:53.441743   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:53.711461   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:53.835593   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:53.943247   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:54.214353   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:54.334227   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:54.439415   83607 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0315 22:58:54.711302   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:54.833885   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:54.940022   83607 kapi.go:107] duration metric: took 1m13.507147774s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0315 22:58:55.210033   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0315 22:58:55.336941   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:55.711193   83607 kapi.go:107] duration metric: took 1m12.006762799s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0315 22:58:55.834447   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:56.334245   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:56.834100   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:57.333371   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:57.930329   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:58.334307   83607 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0315 22:58:58.834898   83607 kapi.go:107] duration metric: took 1m13.005144765s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0315 22:58:58.836671   83607 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-097314 cluster.
	I0315 22:58:58.838043   83607 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0315 22:58:58.839360   83607 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0315 22:58:58.840666   83607 out.go:177] * Enabled addons: ingress-dns, storage-provisioner, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, helm-tiller, metrics-server, inspektor-gadget, yakd, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0315 22:58:58.841909   83607 addons.go:505] duration metric: took 1m26.639340711s for enable addons: enabled=[ingress-dns storage-provisioner cloud-spanner nvidia-device-plugin storage-provisioner-rancher helm-tiller metrics-server inspektor-gadget yakd volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0315 22:58:58.841948   83607 start.go:245] waiting for cluster config update ...
	I0315 22:58:58.841976   83607 start.go:254] writing updated cluster config ...
	I0315 22:58:58.842243   83607 ssh_runner.go:195] Run: rm -f paused
	I0315 22:58:58.893207   83607 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0315 22:58:58.894958   83607 out.go:177] * Done! kubectl is now configured to use "addons-097314" cluster and "default" namespace by default
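
	As a side note to the gcp-auth hint printed above: a minimal, hypothetical pod manifest that opts out of credential injection by carrying the `gcp-auth-skip-secret` label might look like the sketch below. The pod name and image are placeholders and are not taken from this test run; only the label key comes from the addon's own message.

	# hypothetical example, not part of the test output
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-auth-demo          # placeholder name
	  labels:
	    gcp-auth-skip-secret: "true"  # label key quoted in the addon message above
	spec:
	  containers:
	  - name: app
	    image: nginx                  # placeholder image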
	
	
	==> CRI-O <==
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.915219696Z" level=debug msg="Request: &AttachRequest{ContainerId:6e2d271c7207ac5d8c93b6d9197d4e148cafeb09f8aa705b77a932a868d75162,Stdin:true,Tty:true,Stdout:true,Stderr:true,}" file="otel-collector/interceptors.go:62" id=6bd16631-388c-4275-ac27-f63ddbff5082 name=/runtime.v1.RuntimeService/Attach
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.915475716Z" level=debug msg="Response error: unable to prepare attach endpoint" file="otel-collector/interceptors.go:71" id=6bd16631-388c-4275-ac27-f63ddbff5082 name=/runtime.v1.RuntimeService/Attach
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.924370101Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:6e2d271c7207ac5d8c93b6d9197d4e148cafeb09f8aa705b77a932a868d75162,Verbose:false,}" file="otel-collector/interceptors.go:62" id=dcea9de1-0b10-4ece-88dc-d93eec7a56f3 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.924496151Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:6e2d271c7207ac5d8c93b6d9197d4e148cafeb09f8aa705b77a932a868d75162,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1710543560748215273,StartedAt:1710543560787785291,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/alpine/helm:2.16.3,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad2fde7-d630-46a1-a59e-ccbd85431dd2,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6235,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/8ad2fde7-d630-46a1-a59e-ccbd85431dd2/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/8ad2fde7-d630-46a1-a59e-ccbd85431dd2/containers/helm-test/7324893c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/8ad2fde7-d630-46a1-a59e-ccbd85431dd2/volumes/kubernetes.io~projected/kube-api-access-2487b,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_helm-test_8ad2fde7-d630-46a1-a59e-ccbd85431dd2/helm-test/0.log,Resources:&ContainerResources{Linu
x:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:2,MemoryLimitInBytes:0,OomScoreAdj:1000,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=dcea9de1-0b10-4ece-88dc-d93eec7a56f3 name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.967421314Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=56fb5284-fc2f-426d-8e4b-aaf7c4626e7b name=/runtime.v1.RuntimeService/Version
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.967492378Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=56fb5284-fc2f-426d-8e4b-aaf7c4626e7b name=/runtime.v1.RuntimeService/Version
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.971451067Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c5fd2c7-9692-4e18-b558-93940f97e59f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.972555617Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710543560972525055,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:477542,},InodesUsed:&UInt64Value{Value:184,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c5fd2c7-9692-4e18-b558-93940f97e59f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.973281567Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0b2ca26-9fdf-43ad-a90a-bcd542eb7d7a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.973340497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0b2ca26-9fdf-43ad-a90a-bcd542eb7d7a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.974369309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e2d271c7207ac5d8c93b6d9197d4e148cafeb09f8aa705b77a932a868d75162,PodSandboxId:d473a55918364fa717f54f80fdc2d9f30953d2cddcdeeee230e192c9d9f705ef,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_RUNNING,CreatedAt:1710543560698412223,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad2fde7-d630-46a1-a59e-ccbd85431dd2,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6235,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35b01fecfd3c0c9a3588500dabcb97634b9b0c9e03ddf566520ee9a45427d7f2,PodSandboxId:9d8e093e0f3dae56bda55a7f350f5a2969c52a76b5d858680bd4df0ea941089f,Metadata:&ContainerMetadata{Name:gadget,Attempt:2,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:81f48f8d24e42642898d5669b6926805425c3577187c161e14dcdd4f857e1f8e,State:CONTAINER_EXITED,CreatedAt:1710543539714916740,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-bzqbb,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f316897a-14a4-4d60-a680-3ed2dd3166ee,},Annotations:map[string]string{io.kubernetes.container.hash: ef27efe9,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"
/cleanup\"]}},io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d,PodSandboxId:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710543538005671829,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,},Annotations:map[string]st
ring{io.kubernetes.container.hash: a64da723,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4786fa6f76feb0e4695c3767f2d295550c887027806d4e640b1ed2c33852dcf6,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1710543535199193409,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes
.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: 88252db6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5,PodSandboxId:6e3cb3b3c63ce8095d8eba3124b68f419d2f42d7944fa38564c2d3130f212793,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1710543533787523324,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-c
ontroller-76dc478dd8-ql9pj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f80d4d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:06f8bb05f3b3a57fd0fa5e2d17e260b81906a2efe2e7cf17e6301cfed1328a23,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-stora
ge/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1710543526902595009,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: d9bac84e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3826f24a4323581b8ad3132e05e89901b91df5312be3314c170b31bc98edf4,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageS
pec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1710543525212790897,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: becfc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c825863e6ffd1c62a1bca156e971d19e235efde104336ba3d9b05b8b479bb5,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:hostpath,
Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1710543524299376167,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: 88f7783,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655c51734c4819649525feff7a6fa21900cee788c2bb050fc5750f78302d8675,
PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1710543522057184797,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: 321e75e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1175da
4d0dd405e88935f02d8f4debe8afbb09b7b074c429a0bfe0c76e26ffd,PodSandboxId:e5db03df72313af84a79fd607118c150525ce226c8d0c56ee5787df05852842a,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1710543519933453414,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba76f6d6-961f-4d78-96ee-b5169360170f,},Annotations:map[string]string{io.kubernetes.container.hash: b310b014,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30
,},},&Container{Id:1da0b76fe97da4482bb4d7523b797b96c494d64cf64033a99db52310b744fc51,PodSandboxId:addfe5861da3270c68257ebb878a44bf3a6521f26e8211087e85bfedfee8739c,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1710543518362506685,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ba8ff-3283-4356-a495-7ebfd59456b6,},Annotations:map[string]string{io.kubernetes.container.hash: cf735ee8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.term
inationGracePeriod: 30,},},&Container{Id:e2ccb85347428904b0d886921c99948947d95eda789a6d68969002906fadff6f,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1710543516933350353,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: ffeceb47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf304fd980426d954707f8a64d861b997836692164cd559753666f089ad6d9eb,PodSandboxId:570bc8c2c75ef357c570d661fe907391800187b9debbbaefcbed4e12e35765e8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710543515552600137,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-f89sv,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8fb6ef82-c69f-4938-9750-703821777bae,},Annotations:map[string]string{io.kubernetes.container.hash: 34101aa2,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:94fac4341225af4159fb24f9bda7b6f9b91e0bb26e3b9407c87986356a38477c,PodSandboxId:4445e761fa88d085b175f309b40fda36f32d52359aad1fc66df5d30f409405d1,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710543515445253564,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-89rpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7c3085eb-ecc4-46f2-87d6-e598137d5c05,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 19baefef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a38da70685745709b0a8add051c14a539203fec616b87637676631ddb9d141,PodSandboxId:4b42b87d89dd1b4d09f683a5a1f71dfa6a5b02c4ce3abcadc159b7b321321839,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710543512471834636,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gm4wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee3baf8-f3
b4-4a5d-8724-f2c5356c9d59,},Annotations:map[string]string{io.kubernetes.container.hash: dd1bef0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ffc2a54ba4a4570cea0ffee96ae3627173ef2355c9f17ed39fc4252752c479c,PodSandboxId:c018dddd54c691afb0b93342568fa0dbe18ce5c7c3d77e0e1059032240ad3496,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710543512373947994,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-tdwdz,io.kubernetes.pod.namespace: ingress-ng
inx,io.kubernetes.pod.uid: 486fc903-f174-4206-91d6-a9744dcbfb23,},Annotations:map[string]string{io.kubernetes.container.hash: 2954f0a9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e324c32f75a06a80a991cb9e626e87d6fc5ebb8a8bde9cecfc8534894d2a100,PodSandboxId:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710543503304620706,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-das
hboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,},Annotations:map[string]string{io.kubernetes.container.hash: e0e945f1,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d33b8f4de434d9d48ca9c230669338c8e02e74d17e90c51e7a5cfb18e57876f,PodSandboxId:c28cfa3974156fafaf7c716a0abc13a73d3ef225a67da9186b9a779fd42fc92e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710543497943649190,Labels
:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-wvz4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d796667-8f3f-4044-bb24-25cd1713ebc2,},Annotations:map[string]string{io.kubernetes.container.hash: 124ac715,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91,PodSandboxId:19ac46d8a10ab197c43163c81f8923ce70b680ebd7839c59ce15dbd8e3016081,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a
,State:CONTAINER_RUNNING,CreatedAt:1710543490970760534,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb46bcd-db3c-4022-9b99-e6a29dbf5543,},Annotations:map[string]string{io.kubernetes.container.hash: 73a0034,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1154b624dfb6077d08bbb09a92bf82a501f0afeafbbbda30d4361796faf7594a,PodSandboxId:b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecif
iedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1710543481122731057,Labels:map[string]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-5s4t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159edcb2-34c6-484f-b9c1-7b4d9f4cc492,},Annotations:map[string]string{io.kubernetes.container.hash: 887abf11,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85,PodSandboxId:00183462ad3c8237be611446ea62596a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&ContainerMetadata{Name:metrics-ser
ver,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710543478088923910,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,},Annotations:map[string]string{io.kubernetes.container.hash: 667274d8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be174bf553d958fbd2d852024f00e8cc1e7754ce8823c0
fe85df5c0badcb21a,PodSandboxId:23b937b80961f026935619222061a45c3b5a3280a1f308b3f7a3ac946e41a309,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Image:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35eab485356b42d23b307a833f61565766d6421917fd7176f994c3fc04555a2c,State:CONTAINER_RUNNING,CreatedAt:1710543476235540955,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6548d5df46-mb9fc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e19dae8-0109-477d-a76f-5805ec456869,},Annotations:map[string]string{io.kubernetes.container.hash: a862de7e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78,PodSandboxId:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710543462474927371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{io.kubernetes.container.hash: 506fb218,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termi
nationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23,PodSandboxId:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710543453910561060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,},Annotations:map[string]string{io.kubernetes.container.hash: 98b00e74,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name
\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3,PodSandboxId:5ef770e5867f87e50a902a6e9baaa2f8b75ab65acbe55931f3fa31caedb55e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710543452759732400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4
140-a786-578f0feee2bd,},Annotations:map[string]string{io.kubernetes.container.hash: f8e87be9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05e642c2e04d68f8ddb17d589ac545b7d6e6455cc4cbb87ea05be405497d75c,PodSandboxId:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710543433858697290,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9
790ed80546000baa27b620fb3443e56,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8e1034ff660949fd2f7a2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a,PodSandboxId:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710543433816571257,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 2f963277,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e,PodSandboxId:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710543433886749379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,},Annotations:map[string]string{
io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939,PodSandboxId:d02d2c9bfcd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710543433785318568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,},Annotations:map[string]string{io.kubernetes.con
tainer.hash: 771cd21a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0b2ca26-9fdf-43ad-a90a-bcd542eb7d7a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.990626408Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=57ddaba7-cf7e-4d85-807c-c2bcae269d46 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.991182483Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:d473a55918364fa717f54f80fdc2d9f30953d2cddcdeeee230e192c9d9f705ef,Metadata:&PodSandboxMetadata{Name:helm-test,Uid:8ad2fde7-d630-46a1-a59e-ccbd85431dd2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543557345524655,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad2fde7-d630-46a1-a59e-ccbd85431dd2,run: helm-test,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:59:17.027009738Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&PodSandboxMetadata{Name:gcp-auth-7d69788767-l2z4d,Uid:727dcb11-15e2-441c-a762-621a8942accd,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543530033299962,La
bels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: 7d69788767,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:45.818401039Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6e3cb3b3c63ce8095d8eba3124b68f419d2f42d7944fa38564c2d3130f212793,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-76dc478dd8-ql9pj,Uid:361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543525549416508,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-76dc478dd8-ql9pj,io.kubernetes.pod.namespace: ingress-
nginx,io.kubernetes.pod.uid: 361168d7-9d5a-4edc-9e63-9ec1f7e9b7f5,pod-template-hash: 76dc478dd8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:41.319780204Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:addfe5861da3270c68257ebb878a44bf3a6521f26e8211087e85bfedfee8739c,Metadata:&PodSandboxMetadata{Name:csi-hostpath-resizer-0,Uid:a08ba8ff-3283-4356-a495-7ebfd59456b6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543464292165317,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-resizer,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-resizer-dd9fcd54,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ba8ff-3283-4356-a495-7ebfd59456b6,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-resizer-0,},Annotations:map[string]strin
g{kubernetes.io/config.seen: 2024-03-15T22:57:43.687324287Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&PodSandboxMetadata{Name:csi-hostpathplugin-5g6gq,Uid:e6164251-098a-4dfd-9978-fdc4963327c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543464134521090,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/component: plugin,app.kubernetes.io/instance: hostpath.csi.k8s.io,app.kubernetes.io/name: csi-hostpathplugin,app.kubernetes.io/part-of: csi-driver-host-path,controller-revision-hash: 78bf95c75d,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,kubernetes.io/minikube-addons: csi-hostpath-driver,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:43.577445163Z,kubernetes.io/c
onfig.source: api,},RuntimeHandler:,},&PodSandbox{Id:e5db03df72313af84a79fd607118c150525ce226c8d0c56ee5787df05852842a,Metadata:&PodSandboxMetadata{Name:csi-hostpath-attacher-0,Uid:ba76f6d6-961f-4d78-96ee-b5169360170f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543464007628678,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,app.kubernetes.io/name: csi-hostpath-attacher,apps.kubernetes.io/pod-index: 0,controller-revision-hash: csi-hostpath-attacher-7784d6d6ff,io.kubernetes.container.name: POD,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba76f6d6-961f-4d78-96ee-b5169360170f,kubernetes.io/minikube-addons: csi-hostpath-driver,statefulset.kubernetes.io/pod-name: csi-hostpath-attacher-0,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:43.471431613Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9d8e093e0f3dae56bda55a7f350f5a2969c52a76b5d858680bd4df0ea941089f,M
etadata:&PodSandboxMetadata{Name:gadget-bzqbb,Uid:f316897a-14a4-4d60-a680-3ed2dd3166ee,Namespace:gadget,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543461687321640,Labels:map[string]string{controller-revision-hash: 5d575bd898,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gadget-bzqbb,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: f316897a-14a4-4d60-a680-3ed2dd3166ee,k8s-app: gadget,pod-template-generation: 1,},Annotations:map[string]string{container.apparmor.security.beta.kubernetes.io/gadget: unconfined,inspektor-gadget.kinvolk.io/option-hook-mode: auto,kubernetes.io/config.seen: 2024-03-15T22:57:40.682806979Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4445e761fa88d085b175f309b40fda36f32d52359aad1fc66df5d30f409405d1,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-78b46b4d5c-89rpj,Uid:7c3085eb-ecc4-46f2-87d6-e598137d5c05,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543461604267155,Labels:map[string]string{app: local-path
-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-89rpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7c3085eb-ecc4-46f2-87d6-e598137d5c05,pod-template-hash: 78b46b4d5c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:40.041885383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c28cfa3974156fafaf7c716a0abc13a73d3ef225a67da9186b9a779fd42fc92e,Metadata:&PodSandboxMetadata{Name:snapshot-controller-58dbcc7b99-wvz4s,Uid:0d796667-8f3f-4044-bb24-25cd1713ebc2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543461008504871,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-wvz4s,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d796667-8f3f-4044-bb24-25cd1713ebc2,pod-template-hash: 58dbcc7b99,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:40.5
53151627Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4b42b87d89dd1b4d09f683a5a1f71dfa6a5b02c4ce3abcadc159b7b321321839,Metadata:&PodSandboxMetadata{Name:snapshot-controller-58dbcc7b99-gm4wh,Uid:bee3baf8-f3b4-4a5d-8724-f2c5356c9d59,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543460883527852,Labels:map[string]string{app: snapshot-controller,io.kubernetes.container.name: POD,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gm4wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee3baf8-f3b4-4a5d-8724-f2c5356c9d59,pod-template-hash: 58dbcc7b99,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:40.532396227Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-rb286,Uid:e8f245e4-2a42-4c1e-bc01-a560ebc55844,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543460861
810172,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:40.380293549Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9c400f5a-33f3-460d-a136-9d1ff87f0009,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543460399964208,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33
f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-15T22:57:39.660846505Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:19ac46d8a10ab197c43163c81f8923ce70b680ebd7839c59ce15dbd8e3016081,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid
:beb46bcd-db3c-4022-9b99-e6a29dbf5543,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543459411780486,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb46bcd-db3c-4022-9b99-e6a29dbf5543,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPoli
cy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-03-15T22:57:38.791275383Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794,Metadata:&PodSandboxMetadata{Name:tiller-deploy-7b677967b9-5s4t7,Uid:159edcb2-34c6-484f-b9c1-7b4d9f4cc492,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543459288647947,Labels:map[string]string{app: helm,io.kubernetes.container.name: POD,io.kubernetes.pod.name: tiller-deploy-7b677967b9-5s4t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159edcb2-34c6-484f-b9c1-7b4d9f4cc492,name: tiller,pod-template-hash: 7b677967b9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:38.668764268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:00183462ad3c8237be611446ea6259
6a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&PodSandboxMetadata{Name:metrics-server-69cf46c98-spvr4,Uid:673c996b-9f13-4f55-a0da-458b3f9d201d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543458985874052,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,k8s-app: metrics-server,pod-template-hash: 69cf46c98,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:38.667209427Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:23b937b80961f026935619222061a45c3b5a3280a1f308b3f7a3ac946e41a309,Metadata:&PodSandboxMetadata{Name:cloud-spanner-emulator-6548d5df46-mb9fc,Uid:9e19dae8-0109-477d-a76f-5805ec456869,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543457773330567,Labels:map[string]string{app: cloud-spanner-emulator,io.kubernetes.container.name: POD,io.kubernetes.pod.name: cl
oud-spanner-emulator-6548d5df46-mb9fc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e19dae8-0109-477d-a76f-5805ec456869,pod-template-hash: 6548d5df46,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:37.461362007Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-p6s6d,Uid:7caaa4dc-1836-4020-b722-90edda2d212b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543452384710315,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:32.078552449Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5ef770e5867f87e50a902a6e9baaa2f8b75ab
65acbe55931f3fa31caedb55e9b,Metadata:&PodSandboxMetadata{Name:kube-proxy-zspm2,Uid:11f770a3-08d0-4140-a786-578f0feee2bd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543452169670744,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4140-a786-578f0feee2bd,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:31.842645141Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:d02d2c9bfcd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-097314,Uid:cc0a24d7dad3447463c10be999460f46,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543433601927300,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addo
ns-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.35:8443,kubernetes.io/config.hash: cc0a24d7dad3447463c10be999460f46,kubernetes.io/config.seen: 2024-03-15T22:57:13.123333594Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-097314,Uid:9790ed80546000baa27b620fb3443e56,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543433598954130,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9790ed80546000baa27b620fb3443e56,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 9790
ed80546000baa27b620fb3443e56,kubernetes.io/config.seen: 2024-03-15T22:57:13.123334588Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-097314,Uid:8c603c33ec844a8d77f6277024bcd906,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543433595695356,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8c603c33ec844a8d77f6277024bcd906,kubernetes.io/config.seen: 2024-03-15T22:57:13.123335394Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&PodSandboxMetadata{Name:etcd-addons-097314,Uid:5c71846b6b3b995488933cf
77e54c962,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543433586458968,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.35:2379,kubernetes.io/config.hash: 5c71846b6b3b995488933cf77e54c962,kubernetes.io/config.seen: 2024-03-15T22:57:13.123330382Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=57ddaba7-cf7e-4d85-807c-c2bcae269d46 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.992093487Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c2d2fc58-e5d5-40d2-b2b6-79895c27e38f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.992175372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c2d2fc58-e5d5-40d2-b2b6-79895c27e38f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.992843192Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6e2d271c7207ac5d8c93b6d9197d4e148cafeb09f8aa705b77a932a868d75162,PodSandboxId:d473a55918364fa717f54f80fdc2d9f30953d2cddcdeeee230e192c9d9f705ef,Metadata:&ContainerMetadata{Name:helm-test,Attempt:0,},Image:&ImageSpec{Image:docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:98f6c3b32d565299b035cc773a15cee165942450c44e11cdcaaf370d2c26dc31,State:CONTAINER_RUNNING,CreatedAt:1710543560698412223,Labels:map[string]string{io.kubernetes.container.name: helm-test,io.kubernetes.pod.name: helm-test,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ad2fde7-d630-46a1-a59e-ccbd85431dd2,},Annotations:map[string]string{io.kubernetes.container.hash: d2a6235,io.kubernetes.container.restartCount: 0,io.kubernetes.container.
terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d,PodSandboxId:83ab6e3801aab7994a05e51564e9b2157246c95747cce22ea6f9cb9d7b8299f7,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710543538005671829,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-7d69788767-l2z4d,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 727dcb11-15e2-441c-a762-621a8942accd,},Annotations:map[string]string{io.kubernetes.container.hash: a64da723,io.kubernetes.container.ports: [{\"containerPort\":
8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4786fa6f76feb0e4695c3767f2d295550c887027806d4e640b1ed2c33852dcf6,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:738351fd438f02c0fa796f623f5ec066f7431608d8c20524e0a109871454298c,State:CONTAINER_RUNNING,CreatedAt:1710543535199193409,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annot
ations:map[string]string{io.kubernetes.container.hash: 88252db6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19838b931e7460baceac0b8189d00286e22e22705b7d4f2dd69e00bbfadbdbe5,PodSandboxId:6e3cb3b3c63ce8095d8eba3124b68f419d2f42d7944fa38564c2d3130f212793,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ffcc66479b5baa5a65f94b8b7c73c6ee5ed989ec0b7f8f9371999f335ce4f44c,State:CONTAINER_RUNNING,CreatedAt:1710543533787523324,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-76dc478dd8-ql9pj,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 36
1168d7-9d5a-4edc-9e63-9ec1f7e9b7f5,},Annotations:map[string]string{io.kubernetes.container.hash: 2f80d4d4,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:06f8bb05f3b3a57fd0fa5e2d17e260b81906a2efe2e7cf17e6301cfed1328a23,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Anno
tations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:931dbfd16f87c10b33e6aa2f32ac2d1beef37111d14c94af014c2c76f9326992,State:CONTAINER_RUNNING,CreatedAt:1710543526902595009,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: d9bac84e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf3826f24a4323581b8ad3132e05e89901b91df5312be3314c170b31bc98edf4,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefd
ba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e899260153aedc3a54e6b11ee23f11d96a01236ccd556fbd0372a49d07a7bdb8,State:CONTAINER_RUNNING,CreatedAt:1710543525212790897,Labels:map[string]string{io.kubernetes.container.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: becfc64,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11c825863e6ffd1c62a1bca156e971d19e235efde104336ba3d9b05b8b479bb5,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766
e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e255e073c508c2fe6cd5b51ba718297863d8ab7a2b57edfdd620eae7e26a2167,State:CONTAINER_RUNNING,CreatedAt:1710543524299376167,Labels:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: 88f7783,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:655c51734c4819649525feff7a6fa21900cee788c2bb050fc5750f78302d8675,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&Contain
erMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:88ef14a257f4247460be80e11f16d5ed7cc19e765df128c71515d8d7327e64c1,State:CONTAINER_RUNNING,CreatedAt:1710543522057184797,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: 321e75e6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c1175da4d0dd405e88935f02d8f4debe8afbb09b7b074c429a0bfe0c76e26ffd,PodSandboxId:e5db03df72313af84a79fd60
7118c150525ce226c8d0c56ee5787df05852842a,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:59cbb42146a373fccdb496ee1d8f7de9213c9690266417fa7c1ea2c72b7173eb,State:CONTAINER_RUNNING,CreatedAt:1710543519933453414,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba76f6d6-961f-4d78-96ee-b5169360170f,},Annotations:map[string]string{io.kubernetes.container.hash: b310b014,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1da0b76fe97da4482bb4d7523b797b96c494d64cf64033a99db52310b744fc51,PodSandboxI
d:addfe5861da3270c68257ebb878a44bf3a6521f26e8211087e85bfedfee8739c,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:19a639eda60f037e40b0cb441c26585857fe2ca83d07b2a979e8188c04a6192c,State:CONTAINER_RUNNING,CreatedAt:1710543518362506685,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: csi-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a08ba8ff-3283-4356-a495-7ebfd59456b6,},Annotations:map[string]string{io.kubernetes.container.hash: cf735ee8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e2ccb85347428904b0d886921c99948947d95eda789a6d68969002
906fadff6f,PodSandboxId:09f869a8a040a44d76f06fa04550bcaba8b733afe32c5479e195d4c884a90d69,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a1ed5895ba6353a897f269c4919c8249f176ba9d8719a585dc6ed3cd861fe0a3,State:CONTAINER_RUNNING,CreatedAt:1710543516933350353,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-5g6gq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e6164251-098a-4dfd-9978-fdc4963327c3,},Annotations:map[string]string{io.kubernetes.container.hash: ffeceb47,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernete
s.pod.terminationGracePeriod: 30,},},&Container{Id:94fac4341225af4159fb24f9bda7b6f9b91e0bb26e3b9407c87986356a38477c,PodSandboxId:4445e761fa88d085b175f309b40fda36f32d52359aad1fc66df5d30f409405d1,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1710543515445253564,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-78b46b4d5c-89rpj,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 7c3085eb-ecc4-46f2-87d6-e598137d5c05,},Annotations:map[string]string{io.kubernetes.container.hash: 19baefef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-lo
g,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46a38da70685745709b0a8add051c14a539203fec616b87637676631ddb9d141,PodSandboxId:4b42b87d89dd1b4d09f683a5a1f71dfa6a5b02c4ce3abcadc159b7b321321839,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710543512471834636,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-gm4wh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bee3baf8-f3b4-4a5d-8724-f2c5356c9d59,},Annotations:map[string]string{io.kubernetes.container.hash: dd1bef0f,io.kubernetes.container.restartCo
unt: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e324c32f75a06a80a991cb9e626e87d6fc5ebb8a8bde9cecfc8534894d2a100,PodSandboxId:284fbccc261c77473967e88657e94ae522b66a8a75803407a9aba51cb6d241d8,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bfb18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710543503304620706,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-rb286,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: e8f245e4-2a42-4c1e-bc01-a560ebc55844,},Annotations:map[string]string{io.kubernetes.container.hash: e0e945f1,io.kubernetes.container.ports:
[{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2d33b8f4de434d9d48ca9c230669338c8e02e74d17e90c51e7a5cfb18e57876f,PodSandboxId:c28cfa3974156fafaf7c716a0abc13a73d3ef225a67da9186b9a779fd42fc92e,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aa61ee9c70bc45a33684b5bb1a76e214cb8a51c9d9ae3d06920b60c8cd4cf21c,State:CONTAINER_RUNNING,CreatedAt:1710543497943649190,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-wvz4s,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 0d796667-8f3f-4044-bb24-25cd1713ebc2,},Annotations:map[string]string{io.kubernetes.container.hash: 124ac715,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b29aeda777bc20479de942b76b75ce3dae615e4afe9f16c13c755b43c31edd91,PodSandboxId:19ac46d8a10ab197c43163c81f8923ce70b680ebd7839c59ce15dbd8e3016081,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1499ed4fbd0aa6ea742ab6bce25603aa33556e1ac0e2f24a4901a675247e538a,State:CONTAINER_RUNNING,CreatedAt:1710543490970760534,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-
dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: beb46bcd-db3c-4022-9b99-e6a29dbf5543,},Annotations:map[string]string{io.kubernetes.container.hash: 73a0034,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1154b624dfb6077d08bbb09a92bf82a501f0afeafbbbda30d4361796faf7594a,PodSandboxId:b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794,Metadata:&ContainerMetadata{Name:tiller,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:3f39089e90831c3ef411fe78d2ac642187b617feacacbf72e3f27e28c8dea487,State:CONTAINER_RUNNING,CreatedAt:1710543481122731057,Labels:map[s
tring]string{io.kubernetes.container.name: tiller,io.kubernetes.pod.name: tiller-deploy-7b677967b9-5s4t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159edcb2-34c6-484f-b9c1-7b4d9f4cc492,},Annotations:map[string]string{io.kubernetes.container.hash: 887abf11,io.kubernetes.container.ports: [{\"name\":\"tiller\",\"containerPort\":44134,\"protocol\":\"TCP\"},{\"name\":\"http\",\"containerPort\":44135,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f5f4c88221743878c4a8526045d1c79a557c09ac331320962e0a01efd31b85,PodSandboxId:00183462ad3c8237be611446ea62596a1dc0db7d4076e07f144e11f6f835c5e3,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca,Annotations
:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b9a5a1927366a21e45606fe303f1d287adcb1e09d1be13dd44bdb4cf29146c86,State:CONTAINER_RUNNING,CreatedAt:1710543478088923910,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-69cf46c98-spvr4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 673c996b-9f13-4f55-a0da-458b3f9d201d,},Annotations:map[string]string{io.kubernetes.container.hash: 667274d8,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8be174bf553d958fbd2d852024f00e8cc1e7754ce8823c0fe85df5c0badcb21a,PodSandboxId:23b937b80961f026935619222061a45c3b5a3280a1f308b3f7a3ac946e41a309,Metadata:&ContainerMetadata{Name:cloud-spanner-emulator,Attempt:0,},Imag
e:&ImageSpec{Image:gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:35eab485356b42d23b307a833f61565766d6421917fd7176f994c3fc04555a2c,State:CONTAINER_RUNNING,CreatedAt:1710543476235540955,Labels:map[string]string{io.kubernetes.container.name: cloud-spanner-emulator,io.kubernetes.pod.name: cloud-spanner-emulator-6548d5df46-mb9fc,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 9e19dae8-0109-477d-a76f-5805ec456869,},Annotations:map[string]string{io.kubernetes.container.hash: a862de7e,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":9020,\"protocol\":\"TCP\"},{\"name\":\"grpc\",\"containerPort\":9010,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{I
d:29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78,PodSandboxId:f2e2e22b8b41d6e6352e16bfccb7526bab05f6a93ae02d5a78f1e5d596087138,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710543462474927371,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c400f5a-33f3-460d-a136-9d1ff87f0009,},Annotations:map[string]string{io.kubernetes.container.hash: 506fb218,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ecce4d992f
b313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23,PodSandboxId:f057691998e9ab9d2d7e34d8e6e0e620ef0f03aa6bcdac1239fa5864ef0b694b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710543453910561060,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-p6s6d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7caaa4dc-1836-4020-b722-90edda2d212b,},Annotations:map[string]string{io.kubernetes.container.hash: 98b00e74,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3,PodSandboxId:5ef770e5867f87e50a902a6e9baaa2f8b75ab65acbe55931f3fa31caedb55e9b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710543452759732400,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zspm2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 11f770a3-08d0-4140-a786-578f0feee2bd,},Annotations:map[string]string{io.kubernetes.container.hash: f8e87be9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c05e642c2e04d68f8ddb17d589ac545b7d6e6455cc4cbb87ea05be405497d75c,PodSandboxId:3f955150cb63f69550897a65eb1ff72327af48f25bb92e2fdb3c0bce6ecae530,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710543433858697290,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9790ed80546000baa27b620fb3443e56,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3ed8e1034ff660949fd2f7a2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a,PodSandboxId:18e7af9df8f8de08366f575abc22d6aee6d9d363e6cb9ac2ee7698b01aff111a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710543433816571257,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5c71846b6b3b995488933cf77e54c962,},Annotations:map[string]string{io.kubernetes.container.hash: 2f963277,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.
kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e,PodSandboxId:4e5a96c86a833251a1e38774da4c72b50ef65a7658e4423de445e50045fafbb1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710543433886749379,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8c603c33ec844a8d77f6277024bcd906,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.contai
ner.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939,PodSandboxId:d02d2c9bfcd714ad35667ad29eebcec0458bcec0ca5c4fa174a46c6c50e63859,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710543433785318568,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-097314,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc0a24d7dad3447463c10be999460f46,},Annotations:map[string]string{io.kubernetes.container.hash: 771cd21a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMe
ssagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c2d2fc58-e5d5-40d2-b2b6-79895c27e38f name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.994829941Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:nil,LabelSelector:map[string]string{io.kubernetes.pod.uid: 159edcb2-34c6-484f-b9c1-7b4d9f4cc492,},},}" file="otel-collector/interceptors.go:62" id=bd59851f-6211-4ac3-a905-11ecb12aaf90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.994944512Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794,Metadata:&PodSandboxMetadata{Name:tiller-deploy-7b677967b9-5s4t7,Uid:159edcb2-34c6-484f-b9c1-7b4d9f4cc492,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710543459288647947,Labels:map[string]string{app: helm,io.kubernetes.container.name: POD,io.kubernetes.pod.name: tiller-deploy-7b677967b9-5s4t7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 159edcb2-34c6-484f-b9c1-7b4d9f4cc492,name: tiller,pod-template-hash: 7b677967b9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-15T22:57:38.668764268Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bd59851f-6211-4ac3-a905-11ecb12aaf90 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.995518226Z" level=debug msg="Request: &PortForwardRequest{PodSandboxId:b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794,Port:[],}" file="otel-collector/interceptors.go:62" id=b29eb312-9cb6-4f1b-a35e-1d5f3e830625 name=/runtime.v1.RuntimeService/PortForward
	Mar 15 22:59:20 addons-097314 crio[671]: time="2024-03-15 22:59:20.995651093Z" level=debug msg="Response: &PortForwardResponse{Url:http://127.0.0.1:41349/portforward/8az-JJOG,}" file="otel-collector/interceptors.go:74" id=b29eb312-9cb6-4f1b-a35e-1d5f3e830625 name=/runtime.v1.RuntimeService/PortForward
	Mar 15 22:59:21 addons-097314 crio[671]: time="2024-03-15 22:59:21.004216053Z" level=info msg="Starting port forward for b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794 in network namespace /var/run/netns/f3a49ddc-5d84-4038-8f77-b282eaa85aff" file="oci/runtime_oci_linux.go:18"
	Mar 15 22:59:21 addons-097314 crio[671]: time="2024-03-15 22:59:21.005079682Z" level=debug msg="PortForward (id: b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794, port: 44134): copy data from container to client" file="oci/runtime_oci_linux.go:48"
	Mar 15 22:59:21 addons-097314 crio[671]: time="2024-03-15 22:59:21.005093671Z" level=debug msg="PortForward (id: b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794, port: 44134): copy data from client to container" file="oci/runtime_oci_linux.go:48"
	Mar 15 22:59:21 addons-097314 conmon[7040]: conmon 6e2d271c7207ac5d8c93 <nwarn>: stdio_input read failed Input/output error
	Mar 15 22:59:21 addons-097314 crio[671]: time="2024-03-15 22:59:21.020737643Z" level=debug msg="PortForward (id: b2168a0f6202f4e80058db017e4a106bea75b16ec3bb1e58e74c2dcfd5332794, port: 44134): stop forwarding in direction: <nil>" file="oci/runtime_oci_linux.go:48"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD
	6e2d271c7207a       docker.io/alpine/helm@sha256:9d9fab00e0680f1328924429925595dfe96a68531c8a9c1518d05ee2ad45c36f                                                Less than a second ago   Exited              helm-test                                0                   d473a55918364       helm-test
	35b01fecfd3c0       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:c6db0381372939368364efb98d8c90f4b0e3d86a5637682b85a01195937d9eff                            21 seconds ago           Exited              gadget                                   2                   9d8e093e0f3da       gadget-bzqbb
	24e773cc47dab       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                                 23 seconds ago           Running             gcp-auth                                 0                   83ab6e3801aab       gcp-auth-7d69788767-l2z4d
	4786fa6f76feb       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          25 seconds ago           Running             csi-snapshotter                          0                   09f869a8a040a       csi-hostpathplugin-5g6gq
	19838b931e746       registry.k8s.io/ingress-nginx/controller@sha256:42b3f0e5d0846876b1791cd3afeb5f1cbbe4259d6f35651dcc1b5c980925379c                             27 seconds ago           Running             controller                               0                   6e3cb3b3c63ce       ingress-nginx-controller-76dc478dd8-ql9pj
	06f8bb05f3b3a       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          34 seconds ago           Running             csi-provisioner                          0                   09f869a8a040a       csi-hostpathplugin-5g6gq
	cf3826f24a432       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            35 seconds ago           Running             liveness-probe                           0                   09f869a8a040a       csi-hostpathplugin-5g6gq
	11c825863e6ff       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           36 seconds ago           Running             hostpath                                 0                   09f869a8a040a       csi-hostpathplugin-5g6gq
	655c51734c481       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                39 seconds ago           Running             node-driver-registrar                    0                   09f869a8a040a       csi-hostpathplugin-5g6gq
	c1175da4d0dd4       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             41 seconds ago           Running             csi-attacher                             0                   e5db03df72313       csi-hostpath-attacher-0
	1da0b76fe97da       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              42 seconds ago           Running             csi-resizer                              0                   addfe5861da32       csi-hostpath-resizer-0
	e2ccb85347428       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   44 seconds ago           Running             csi-external-health-monitor-controller   0                   09f869a8a040a       csi-hostpathplugin-5g6gq
	bf304fd980426       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   45 seconds ago           Exited              patch                                    0                   570bc8c2c75ef       ingress-nginx-admission-patch-f89sv
	94fac4341225a       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             45 seconds ago           Running             local-path-provisioner                   0                   4445e761fa88d       local-path-provisioner-78b46b4d5c-89rpj
	46a38da706857       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      48 seconds ago           Running             volume-snapshot-controller               0                   4b42b87d89dd1       snapshot-controller-58dbcc7b99-gm4wh
	4ffc2a54ba4a4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023                   48 seconds ago           Exited              create                                   0                   c018dddd54c69       ingress-nginx-admission-create-tdwdz
	1e324c32f75a0       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                              57 seconds ago           Running             yakd                                     0                   284fbccc261c7       yakd-dashboard-9947fc6bf-rb286
	2d33b8f4de434       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago       Running             volume-snapshot-controller               0                   c28cfa3974156       snapshot-controller-58dbcc7b99-wvz4s
	b29aeda777bc2       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago       Running             minikube-ingress-dns                     0                   19ac46d8a10ab       kube-ingress-dns-minikube
	1154b624dfb60       ghcr.io/helm/tiller@sha256:4c43eb385032945cad047d2350e4945d913b90b3ab43ee61cecb32a495c6df0f                                                  About a minute ago       Running             tiller                                   0                   b2168a0f6202f       tiller-deploy-7b677967b9-5s4t7
	c6f5f4c882217       registry.k8s.io/metrics-server/metrics-server@sha256:1a7c305befcde0bb325ce081e97834f471cb9b7efa337bb52201caf3ed9ffaca                        About a minute ago       Running             metrics-server                           0                   00183462ad3c8       metrics-server-69cf46c98-spvr4
	8be174bf553d9       gcr.io/cloud-spanner-emulator/emulator@sha256:41d5dccfcf13817a2348beba0ca7c650ffdd795f7fcbe975b7822c9eed262e15                               About a minute ago       Running             cloud-spanner-emulator                   0                   23b937b80961f       cloud-spanner-emulator-6548d5df46-mb9fc
	29ada771117de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago       Running             storage-provisioner                      0                   f2e2e22b8b41d       storage-provisioner
	ecce4d992fb31       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago       Running             coredns                                  0                   f057691998e9a       coredns-5dd5756b68-p6s6d
	bb754aa5ded80       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                                             About a minute ago       Running             kube-proxy                               0                   5ef770e5867f8       kube-proxy-zspm2
	659078f5add23       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                                             2 minutes ago            Running             kube-scheduler                           0                   4e5a96c86a833       kube-scheduler-addons-097314
	c05e642c2e04d       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                                             2 minutes ago            Running             kube-controller-manager                  0                   3f955150cb63f       kube-controller-manager-addons-097314
	3ed8e1034ff66       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago            Running             etcd                                     0                   18e7af9df8f8d       etcd-addons-097314
	4403cafbe1aff       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                                             2 minutes ago            Running             kube-apiserver                           0                   d02d2c9bfcd71       kube-apiserver-addons-097314
	
	
	==> coredns [ecce4d992fb313f07af591c1200ce697dd35d468821bbf611dcc4b7429259b23] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	[INFO] Reloading complete
	[INFO] 127.0.0.1:43283 - 17841 "HINFO IN 6454626148988223792.8343432799215500319. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021021614s
	[INFO] 10.244.0.22:38076 - 49108 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000504131s
	[INFO] 10.244.0.22:50309 - 56871 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000177763s
	[INFO] 10.244.0.22:47752 - 29698 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00012988s
	[INFO] 10.244.0.22:44958 - 57465 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000102932s
	[INFO] 10.244.0.22:58986 - 25833 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000110912s
	[INFO] 10.244.0.22:34448 - 55463 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000186864s
	[INFO] 10.244.0.22:60633 - 3662 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.003807109s
	[INFO] 10.244.0.22:53624 - 4084 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.004127163s
	[INFO] 10.244.0.25:37505 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000464611s
	[INFO] 10.244.0.25:46065 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000130821s
	
	
	==> describe nodes <==
	Name:               addons-097314
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-097314
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=addons-097314
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T22_57_19_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-097314
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-097314"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 22:57:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-097314
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 22:59:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 22:58:51 +0000   Fri, 15 Mar 2024 22:57:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 22:58:51 +0000   Fri, 15 Mar 2024 22:57:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 22:58:51 +0000   Fri, 15 Mar 2024 22:57:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 22:58:51 +0000   Fri, 15 Mar 2024 22:57:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.35
	  Hostname:    addons-097314
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 b25183c5e3414df78f950fb09dc6c38c
	  System UUID:                b25183c5-e341-4df7-8f95-0fb09dc6c38c
	  Boot ID:                    4fb98055-0700-4a85-9eae-f3b0ed873bc7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (22 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6548d5df46-mb9fc      0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  gadget                      gadget-bzqbb                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  gcp-auth                    gcp-auth-7d69788767-l2z4d                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  ingress-nginx               ingress-nginx-controller-76dc478dd8-ql9pj    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         100s
	  kube-system                 coredns-5dd5756b68-p6s6d                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     109s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 csi-hostpathplugin-5g6gq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 etcd-addons-097314                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m2s
	  kube-system                 helm-test                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4s
	  kube-system                 kube-apiserver-addons-097314                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-controller-manager-addons-097314        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-proxy-zspm2                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-addons-097314                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 metrics-server-69cf46c98-spvr4               100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         103s
	  kube-system                 snapshot-controller-58dbcc7b99-gm4wh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 snapshot-controller-58dbcc7b99-wvz4s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 tiller-deploy-7b677967b9-5s4t7               0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  local-path-storage          local-path-provisioner-78b46b4d5c-89rpj      0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-rb286               0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     101s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             588Mi (15%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 105s  kube-proxy       
	  Normal  Starting                 2m2s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m2s  kubelet          Node addons-097314 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m2s  kubelet          Node addons-097314 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m2s  kubelet          Node addons-097314 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m2s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m2s  kubelet          Node addons-097314 status is now: NodeReady
	  Normal  RegisteredNode           110s  node-controller  Node addons-097314 event: Registered Node addons-097314 in Controller
	
	
	==> dmesg <==
	[  +0.051069] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.167815] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.141924] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.233825] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.810346] systemd-fstab-generator[752]: Ignoring "noauto" option for root device
	[  +0.057204] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.921691] systemd-fstab-generator[919]: Ignoring "noauto" option for root device
	[  +0.457978] kauditd_printk_skb: 46 callbacks suppressed
	[  +5.780323] systemd-fstab-generator[1254]: Ignoring "noauto" option for root device
	[  +0.084713] kauditd_printk_skb: 41 callbacks suppressed
	[ +13.344667] systemd-fstab-generator[1485]: Ignoring "noauto" option for root device
	[  +0.046296] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003648] kauditd_printk_skb: 77 callbacks suppressed
	[  +5.068077] kauditd_printk_skb: 105 callbacks suppressed
	[  +8.690714] kauditd_printk_skb: 98 callbacks suppressed
	[Mar15 22:58] kauditd_printk_skb: 11 callbacks suppressed
	[  +9.621383] kauditd_printk_skb: 1 callbacks suppressed
	[ +24.457815] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.206184] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.274100] kauditd_printk_skb: 66 callbacks suppressed
	[  +6.867087] kauditd_printk_skb: 5 callbacks suppressed
	[  +5.770499] kauditd_printk_skb: 11 callbacks suppressed
	[Mar15 22:59] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.145272] kauditd_printk_skb: 23 callbacks suppressed
	[  +6.683918] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [3ed8e1034ff660949fd2f7a2763c128cb8a77e1de4919cb2c7c2df08d7d2fa5a] <==
	{"level":"warn","ts":"2024-03-15T22:58:43.827062Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.29437ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81493"}
	{"level":"info","ts":"2024-03-15T22:58:43.829132Z","caller":"traceutil/trace.go:171","msg":"trace[1032931903] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1084; }","duration":"133.367615ms","start":"2024-03-15T22:58:43.695754Z","end":"2024-03-15T22:58:43.829122Z","steps":["trace[1032931903] 'agreement among raft nodes before linearized reading'  (duration: 130.737116ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T22:58:50.034757Z","caller":"traceutil/trace.go:171","msg":"trace[908855674] linearizableReadLoop","detail":"{readStateIndex:1156; appliedIndex:1155; }","duration":"340.031758ms","start":"2024-03-15T22:58:49.694712Z","end":"2024-03-15T22:58:50.034744Z","steps":["trace[908855674] 'read index received'  (duration: 339.905525ms)","trace[908855674] 'applied index is now lower than readState.Index'  (duration: 125.808µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T22:58:50.035066Z","caller":"traceutil/trace.go:171","msg":"trace[291216179] transaction","detail":"{read_only:false; response_revision:1119; number_of_response:1; }","duration":"394.062386ms","start":"2024-03-15T22:58:49.640901Z","end":"2024-03-15T22:58:50.034964Z","steps":["trace[291216179] 'process raft request'  (duration: 393.758725ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:58:50.035201Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:58:49.640883Z","time spent":"394.253045ms","remote":"127.0.0.1:34732","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2186,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/snapshot-controller-58dbcc7b99\" mod_revision:1004 > success:<request_put:<key:\"/registry/replicasets/kube-system/snapshot-controller-58dbcc7b99\" value_size:2114 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/snapshot-controller-58dbcc7b99\" > >"}
	{"level":"warn","ts":"2024-03-15T22:58:50.035508Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"340.82688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:81493"}
	{"level":"info","ts":"2024-03-15T22:58:50.035534Z","caller":"traceutil/trace.go:171","msg":"trace[1726506838] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1119; }","duration":"340.863981ms","start":"2024-03-15T22:58:49.694663Z","end":"2024-03-15T22:58:50.035527Z","steps":["trace[1726506838] 'agreement among raft nodes before linearized reading'  (duration: 340.644486ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:58:50.035552Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:58:49.69465Z","time spent":"340.897674ms","remote":"127.0.0.1:34446","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":81515,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2024-03-15T22:58:50.035723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"215.664087ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10811"}
	{"level":"info","ts":"2024-03-15T22:58:50.035738Z","caller":"traceutil/trace.go:171","msg":"trace[1189402933] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1119; }","duration":"215.681629ms","start":"2024-03-15T22:58:49.820052Z","end":"2024-03-15T22:58:50.035734Z","steps":["trace[1189402933] 'agreement among raft nodes before linearized reading'  (duration: 215.634047ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:58:50.03588Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.725868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13721"}
	{"level":"info","ts":"2024-03-15T22:58:50.035895Z","caller":"traceutil/trace.go:171","msg":"trace[139769917] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1119; }","duration":"110.741862ms","start":"2024-03-15T22:58:49.925149Z","end":"2024-03-15T22:58:50.035891Z","steps":["trace[139769917] 'agreement among raft nodes before linearized reading'  (duration: 110.703422ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:58:50.046255Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.382586ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-03-15T22:58:50.046295Z","caller":"traceutil/trace.go:171","msg":"trace[791014904] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1119; }","duration":"163.432203ms","start":"2024-03-15T22:58:49.882851Z","end":"2024-03-15T22:58:50.046283Z","steps":["trace[791014904] 'agreement among raft nodes before linearized reading'  (duration: 153.10697ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T22:58:53.093348Z","caller":"traceutil/trace.go:171","msg":"trace[1029216254] linearizableReadLoop","detail":"{readStateIndex:1164; appliedIndex:1163; }","duration":"167.664824ms","start":"2024-03-15T22:58:52.925669Z","end":"2024-03-15T22:58:53.093334Z","steps":["trace[1029216254] 'read index received'  (duration: 166.8429ms)","trace[1029216254] 'applied index is now lower than readState.Index'  (duration: 821.084µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T22:58:53.094339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.627599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13721"}
	{"level":"info","ts":"2024-03-15T22:58:53.094416Z","caller":"traceutil/trace.go:171","msg":"trace[572871056] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1126; }","duration":"168.761032ms","start":"2024-03-15T22:58:52.925646Z","end":"2024-03-15T22:58:53.094407Z","steps":["trace[572871056] 'agreement among raft nodes before linearized reading'  (duration: 167.902203ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T22:59:18.908961Z","caller":"traceutil/trace.go:171","msg":"trace[350581482] linearizableReadLoop","detail":"{readStateIndex:1391; appliedIndex:1390; }","duration":"418.652532ms","start":"2024-03-15T22:59:18.490295Z","end":"2024-03-15T22:59:18.908947Z","steps":["trace[350581482] 'read index received'  (duration: 418.516911ms)","trace[350581482] 'applied index is now lower than readState.Index'  (duration: 135.105µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T22:59:18.909212Z","caller":"traceutil/trace.go:171","msg":"trace[1873374545] transaction","detail":"{read_only:false; response_revision:1344; number_of_response:1; }","duration":"441.256914ms","start":"2024-03-15T22:59:18.467946Z","end":"2024-03-15T22:59:18.909203Z","steps":["trace[1873374545] 'process raft request'  (duration: 440.906777ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:18.909345Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:18.467929Z","time spent":"441.33044ms","remote":"127.0.0.1:34518","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-gu22rgt737n4yetukht3e5wla4\" mod_revision:1230 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-gu22rgt737n4yetukht3e5wla4\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-gu22rgt737n4yetukht3e5wla4\" > >"}
	{"level":"warn","ts":"2024-03-15T22:59:18.909487Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"134.368459ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:16 size:73465"}
	{"level":"info","ts":"2024-03-15T22:59:18.910233Z","caller":"traceutil/trace.go:171","msg":"trace[352321207] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:16; response_revision:1344; }","duration":"135.120343ms","start":"2024-03-15T22:59:18.775102Z","end":"2024-03-15T22:59:18.910223Z","steps":["trace[352321207] 'agreement among raft nodes before linearized reading'  (duration: 134.239133ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:18.909528Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"419.299912ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T22:59:18.91112Z","caller":"traceutil/trace.go:171","msg":"trace[504965360] range","detail":"{range_begin:/registry/horizontalpodautoscalers/; range_end:/registry/horizontalpodautoscalers0; response_count:0; response_revision:1344; }","duration":"420.885727ms","start":"2024-03-15T22:59:18.490222Z","end":"2024-03-15T22:59:18.911107Z","steps":["trace[504965360] 'agreement among raft nodes before linearized reading'  (duration: 419.288579ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-15T22:59:18.911202Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-15T22:59:18.490199Z","time spent":"420.983995ms","remote":"127.0.0.1:34488","response type":"/etcdserverpb.KV/Range","request count":0,"request size":76,"response count":0,"response size":27,"request content":"key:\"/registry/horizontalpodautoscalers/\" range_end:\"/registry/horizontalpodautoscalers0\" count_only:true "}
	
	
	==> gcp-auth [24e773cc47dab3e0ee61849f44d4575ec77327c92b30b8f1ac654652c39c6b0d] <==
	2024/03/15 22:58:58 GCP Auth Webhook started!
	2024/03/15 22:58:59 Ready to marshal response ...
	2024/03/15 22:58:59 Ready to write response ...
	2024/03/15 22:58:59 Ready to marshal response ...
	2024/03/15 22:58:59 Ready to write response ...
	2024/03/15 22:59:09 Ready to marshal response ...
	2024/03/15 22:59:09 Ready to write response ...
	2024/03/15 22:59:10 Ready to marshal response ...
	2024/03/15 22:59:10 Ready to write response ...
	2024/03/15 22:59:17 Ready to marshal response ...
	2024/03/15 22:59:17 Ready to write response ...
	
	
	==> kernel <==
	 22:59:21 up 2 min,  0 users,  load average: 3.05, 1.54, 0.59
	Linux addons-097314 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4403cafbe1aff3b3023c910212398b5d5c78cf376b90d164bca2970c058b1939] <==
	I0315 22:57:41.103078       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0315 22:57:41.114179       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller" clusterIPs={"IPv4":"10.97.1.135"}
	I0315 22:57:41.161103       1 alloc.go:330] "allocated clusterIPs" service="ingress-nginx/ingress-nginx-controller-admission" clusterIPs={"IPv4":"10.96.180.62"}
	I0315 22:57:41.222455       1 controller.go:624] quota admission added evaluator for: jobs.batch
	W0315 22:57:42.199039       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 22:57:43.201958       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 22:57:43.202503       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 22:57:43.303527       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-attacher" clusterIPs={"IPv4":"10.106.155.254"}
	I0315 22:57:43.334926       1 controller.go:624] quota admission added evaluator for: statefulsets.apps
	I0315 22:57:43.551502       1 alloc.go:330] "allocated clusterIPs" service="kube-system/csi-hostpath-resizer" clusterIPs={"IPv4":"10.97.120.248"}
	W0315 22:57:44.720177       1 aggregator.go:166] failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0315 22:57:45.601195       1 alloc.go:330] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.105.144.182"}
	W0315 22:57:59.413765       1 handler_proxy.go:93] no RequestInfo found in the context
	E0315 22:57:59.413841       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0315 22:57:59.414237       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.172.233:443: connect: connection refused
	I0315 22:57:59.414370       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0315 22:57:59.415498       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.172.233:443: connect: connection refused
	E0315 22:57:59.420852       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.172.233:443: connect: connection refused
	E0315 22:57:59.443047       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1: Get "https://10.104.172.233:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.104.172.233:443: connect: connection refused
	I0315 22:57:59.557877       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 22:58:16.084859       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0315 22:59:16.085567       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [c05e642c2e04d68f8ddb17d589ac545b7d6e6455cc4cbb87ea05be405497d75c] <==
	I0315 22:58:43.850084       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0315 22:58:43.850226       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0315 22:58:43.864754       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0315 22:58:43.864932       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0315 22:58:43.865317       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-patch" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0315 22:58:43.865352       1 event.go:307] "Event occurred" object="gcp-auth/gcp-auth-certs-create" fieldPath="" kind="Job" apiVersion="batch/v1" type="Normal" reason="Completed" message="Job completed"
	I0315 22:58:50.050393       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="412.84007ms"
	I0315 22:58:50.050657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="192.093µs"
	I0315 22:58:54.486506       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="82.095µs"
	I0315 22:58:58.548241       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="11.689593ms"
	I0315 22:58:58.548352       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-7d69788767" duration="57.83µs"
	I0315 22:58:59.055085       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="WaitForFirstConsumer" message="waiting for first consumer to be created before binding"
	I0315 22:58:59.073189       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 22:58:59.209662       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 22:59:01.823137       1 event.go:307] "Event occurred" object="default/test-pvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'rancher.io/local-path' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 22:59:01.823179       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0315 22:59:05.599723       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="34.713444ms"
	I0315 22:59:05.602447       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="83.139µs"
	I0315 22:59:11.844371       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-78b46b4d5c" duration="8.129µs"
	I0315 22:59:13.028820       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0315 22:59:13.031518       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0315 22:59:13.080416       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-create"
	I0315 22:59:13.081702       1 job_controller.go:562] "enqueueing job" key="gcp-auth/gcp-auth-certs-patch"
	I0315 22:59:13.596506       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="14.657µs"
	I0315 22:59:16.822739       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	
	
	==> kube-proxy [bb754aa5ded8088933a76db1d81119b8c64378ee10956dd39f8ea0b508ff00c3] <==
	I0315 22:57:33.982162       1 server_others.go:69] "Using iptables proxy"
	I0315 22:57:34.262608       1 node.go:141] Successfully retrieved node IP: 192.168.39.35
	I0315 22:57:35.766044       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 22:57:35.766089       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 22:57:35.769569       1 server_others.go:152] "Using iptables Proxier"
	I0315 22:57:35.769631       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 22:57:35.769789       1 server.go:846] "Version info" version="v1.28.4"
	I0315 22:57:35.769822       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 22:57:35.771078       1 config.go:188] "Starting service config controller"
	I0315 22:57:35.771127       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 22:57:35.771161       1 config.go:97] "Starting endpoint slice config controller"
	I0315 22:57:35.771165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 22:57:35.771523       1 config.go:315] "Starting node config controller"
	I0315 22:57:35.771529       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 22:57:35.872158       1 shared_informer.go:318] Caches are synced for node config
	I0315 22:57:35.872207       1 shared_informer.go:318] Caches are synced for service config
	I0315 22:57:35.872235       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [659078f5add2366bcd8754c7e3ab4d06066d9cb870f017ad1ee1cf9b111c211e] <==
	W0315 22:57:16.237627       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 22:57:16.237658       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 22:57:17.038272       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 22:57:17.038301       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 22:57:17.130127       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 22:57:17.130261       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 22:57:17.142694       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 22:57:17.142766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 22:57:17.154634       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 22:57:17.154808       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 22:57:17.183940       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 22:57:17.184048       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 22:57:17.215692       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 22:57:17.215795       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 22:57:17.220792       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 22:57:17.220903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 22:57:17.240606       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 22:57:17.240698       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 22:57:17.306266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 22:57:17.306310       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 22:57:17.444575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 22:57:17.444638       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 22:57:17.452059       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 22:57:17.452101       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0315 22:57:19.714419       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 15 22:59:17 addons-097314 kubelet[1261]: E0315 22:59:17.027386    1261 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b61d750f-abfa-43b9-80de-36838fccd642" containerName="registry-test"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: E0315 22:59:17.027399    1261 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f08323c1-5f57-4428-ab07-fa1dd1960c2c" containerName="registry"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: E0315 22:59:17.027413    1261 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="03d529e0-4bcd-4fa9-a95b-2921fe26e9cc" containerName="registry-proxy"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: E0315 22:59:17.027420    1261 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b3a5c215-98e8-4e7f-af90-4d6d4f24f664" containerName="helper-pod"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: E0315 22:59:17.027427    1261 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2e033f82-a2e7-42b2-9052-980b0046daa3" containerName="nvidia-device-plugin-ctr"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.027467    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="2e033f82-a2e7-42b2-9052-980b0046daa3" containerName="nvidia-device-plugin-ctr"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.027476    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="b61d750f-abfa-43b9-80de-36838fccd642" containerName="registry-test"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.027483    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="b3a5c215-98e8-4e7f-af90-4d6d4f24f664" containerName="helper-pod"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.027490    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="f08323c1-5f57-4428-ab07-fa1dd1960c2c" containerName="registry"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.027501    1261 memory_manager.go:346] "RemoveStaleState removing state" podUID="03d529e0-4bcd-4fa9-a95b-2921fe26e9cc" containerName="registry-proxy"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.086907    1261 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2487b\" (UniqueName: \"kubernetes.io/projected/8ad2fde7-d630-46a1-a59e-ccbd85431dd2-kube-api-access-2487b\") pod \"helm-test\" (UID: \"8ad2fde7-d630-46a1-a59e-ccbd85431dd2\") " pod="kube-system/helm-test"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.258336    1261 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b3a5c215-98e8-4e7f-af90-4d6d4f24f664" path="/var/lib/kubelet/pods/b3a5c215-98e8-4e7f-af90-4d6d4f24f664/volumes"
	Mar 15 22:59:17 addons-097314 kubelet[1261]: I0315 22:59:17.329613    1261 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Mar 15 22:59:19 addons-097314 kubelet[1261]: I0315 22:59:19.273248    1261 scope.go:117] "RemoveContainer" containerID="b72590f9af184ebf01e1b175e26d782723b8121d7ff05531b672efedcea9509a"
	Mar 15 22:59:19 addons-097314 kubelet[1261]: E0315 22:59:19.304716    1261 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 22:59:19 addons-097314 kubelet[1261]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 22:59:19 addons-097314 kubelet[1261]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 22:59:19 addons-097314 kubelet[1261]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 22:59:19 addons-097314 kubelet[1261]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 22:59:19 addons-097314 kubelet[1261]: I0315 22:59:19.520307    1261 scope.go:117] "RemoveContainer" containerID="08fe336b2f0cd7eb88f363fc52cc7b002cd60289526c630346e891666590a9e0"
	Mar 15 22:59:19 addons-097314 kubelet[1261]: I0315 22:59:19.547814    1261 scope.go:117] "RemoveContainer" containerID="38acfdaa41492681b6b2a404bbdec1f20036e0b040883e185fd2d41f90572f77"
	Mar 15 22:59:19 addons-097314 kubelet[1261]: I0315 22:59:19.589384    1261 scope.go:117] "RemoveContainer" containerID="f9f9d145572404d5a5b58bd6e8a6065a87382e9550235d4d68c854ff28e27d4a"
	Mar 15 22:59:19 addons-097314 kubelet[1261]: I0315 22:59:19.625578    1261 scope.go:117] "RemoveContainer" containerID="ea705a66d3fa64ec47df248be1857cb2c85d511713478605408f744df2296477"
	Mar 15 22:59:20 addons-097314 kubelet[1261]: I0315 22:59:20.847079    1261 kubelet_pods.go:906] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/helm-test" secret="" err="secret \"gcp-auth\" not found"
	Mar 15 22:59:20 addons-097314 kubelet[1261]: E0315 22:59:20.915788    1261 remote_runtime.go:557] "Attach container from runtime service failed" err="rpc error: code = Unknown desc = unable to prepare attach endpoint" containerID="6e2d271c7207ac5d8c93b6d9197d4e148cafeb09f8aa705b77a932a868d75162"
	
	
	==> storage-provisioner [29ada771117de1ffb83881ba5dbb61e46f88cbbe82c1e0aa449ec9b7dbd19a78] <==
	I0315 22:57:43.890334       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0315 22:57:43.958136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0315 22:57:43.958229       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0315 22:57:43.975689       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0315 22:57:43.976208       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-097314_fbbaef0b-1223-402a-a6cc-cd3bdb8ba88c!
	I0315 22:57:43.981273       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d7c377ec-d65e-487d-baf9-e123acc43c72", APIVersion:"v1", ResourceVersion:"788", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-097314_fbbaef0b-1223-402a-a6cc-cd3bdb8ba88c became leader
	I0315 22:57:44.077434       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-097314_fbbaef0b-1223-402a-a6cc-cd3bdb8ba88c!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-097314 -n addons-097314
helpers_test.go:261: (dbg) Run:  kubectl --context addons-097314 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-tdwdz ingress-nginx-admission-patch-f89sv
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-097314 describe pod ingress-nginx-admission-create-tdwdz ingress-nginx-admission-patch-f89sv
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-097314 describe pod ingress-nginx-admission-create-tdwdz ingress-nginx-admission-patch-f89sv: exit status 1 (67.231954ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-tdwdz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-f89sv" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-097314 describe pod ingress-nginx-admission-create-tdwdz ingress-nginx-admission-patch-f89sv: exit status 1
--- FAIL: TestAddons/parallel/MetricsServer (8.50s)

TestAddons/StoppedEnableDisable (154.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-097314
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-097314: exit status 82 (2m0.487658634s)

-- stdout --
	* Stopping node "addons-097314"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-097314" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-097314
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-097314: exit status 11 (21.594393498s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.35:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-097314" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-097314
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-097314: exit status 11 (6.139778185s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.35:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-097314" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-097314
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-097314: exit status 11 (6.143069314s)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.35:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-097314" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.37s)

TestFunctional/parallel/ImageCommands/ImageListShort (2.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image ls --format short --alsologtostderr: (2.304541202s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332624 image ls --format short --alsologtostderr:

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332624 image ls --format short --alsologtostderr:
I0315 23:09:48.621343   91754 out.go:291] Setting OutFile to fd 1 ...
I0315 23:09:48.621551   91754 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:48.621565   91754 out.go:304] Setting ErrFile to fd 2...
I0315 23:09:48.621572   91754 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:48.621886   91754 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
I0315 23:09:48.622697   91754 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:48.622856   91754 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:48.623497   91754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:48.623549   91754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:48.639440   91754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44453
I0315 23:09:48.639940   91754 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:48.640551   91754 main.go:141] libmachine: Using API Version  1
I0315 23:09:48.640585   91754 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:48.640988   91754 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:48.641217   91754 main.go:141] libmachine: (functional-332624) Calling .GetState
I0315 23:09:48.643385   91754 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:48.643440   91754 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:48.658408   91754 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44841
I0315 23:09:48.658888   91754 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:48.659415   91754 main.go:141] libmachine: Using API Version  1
I0315 23:09:48.659438   91754 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:48.659734   91754 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:48.659905   91754 main.go:141] libmachine: (functional-332624) Calling .DriverName
I0315 23:09:48.660108   91754 ssh_runner.go:195] Run: systemctl --version
I0315 23:09:48.660139   91754 main.go:141] libmachine: (functional-332624) Calling .GetSSHHostname
I0315 23:09:48.662887   91754 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:48.663335   91754 main.go:141] libmachine: (functional-332624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:96:04", ip: ""} in network mk-functional-332624: {Iface:virbr1 ExpiryTime:2024-03-16 00:06:17 +0000 UTC Type:0 Mac:52:54:00:6b:96:04 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-332624 Clientid:01:52:54:00:6b:96:04}
I0315 23:09:48.663372   91754 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined IP address 192.168.39.209 and MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:48.663469   91754 main.go:141] libmachine: (functional-332624) Calling .GetSSHPort
I0315 23:09:48.663664   91754 main.go:141] libmachine: (functional-332624) Calling .GetSSHKeyPath
I0315 23:09:48.663813   91754 main.go:141] libmachine: (functional-332624) Calling .GetSSHUsername
I0315 23:09:48.663960   91754 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/functional-332624/id_rsa Username:docker}
I0315 23:09:48.786527   91754 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 23:09:50.850855   91754 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.064286s)
W0315 23:09:50.850947   91754 cache_images.go:715] Failed to list images for profile functional-332624 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E0315 23:09:50.832614    7728 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = context deadline exceeded" filter="&ImageFilter{Image:&ImageSpec{Image:,Annotations:map[string]string{},UserSpecifiedImage:,},}"
time="2024-03-15T23:09:50Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = context deadline exceeded"
I0315 23:09:50.851004   91754 main.go:141] libmachine: Making call to close driver server
I0315 23:09:50.851021   91754 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:50.851309   91754 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:50.851347   91754 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:50.851361   91754 main.go:141] libmachine: Making call to close driver server
I0315 23:09:50.851362   91754 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
I0315 23:09:50.851369   91754 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:50.851615   91754 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:50.851632   91754 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
I0315 23:09:50.851645   91754 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:274: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.30s)

TestMultiControlPlane/serial/StopSecondaryNode (142.15s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 node stop m02 -v=7 --alsologtostderr
E0315 23:14:49.365236   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:15:30.325474   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.496114619s)

-- stdout --
	* Stopping node "ha-285481-m02"  ...

-- /stdout --
** stderr ** 
	I0315 23:14:46.404408   95817 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:14:46.404523   95817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:14:46.404532   95817 out.go:304] Setting ErrFile to fd 2...
	I0315 23:14:46.404536   95817 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:14:46.404718   95817 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:14:46.404964   95817 mustload.go:65] Loading cluster: ha-285481
	I0315 23:14:46.405306   95817 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:14:46.405322   95817 stop.go:39] StopHost: ha-285481-m02
	I0315 23:14:46.405672   95817 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:14:46.405709   95817 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:14:46.422427   95817 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37181
	I0315 23:14:46.422860   95817 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:14:46.423444   95817 main.go:141] libmachine: Using API Version  1
	I0315 23:14:46.423471   95817 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:14:46.423831   95817 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:14:46.426332   95817 out.go:177] * Stopping node "ha-285481-m02"  ...
	I0315 23:14:46.428070   95817 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 23:14:46.428104   95817 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:14:46.428337   95817 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 23:14:46.428364   95817 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:14:46.431511   95817 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:14:46.431913   95817 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:14:46.431944   95817 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:14:46.432061   95817 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:14:46.432230   95817 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:14:46.432384   95817 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:14:46.432516   95817 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:14:46.521053   95817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 23:14:46.576216   95817 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 23:14:46.632381   95817 main.go:141] libmachine: Stopping "ha-285481-m02"...
	I0315 23:14:46.632413   95817 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:14:46.634120   95817 main.go:141] libmachine: (ha-285481-m02) Calling .Stop
	I0315 23:14:46.637806   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 0/120
	I0315 23:14:47.639233   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 1/120
	I0315 23:14:48.641137   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 2/120
	I0315 23:14:49.642300   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 3/120
	I0315 23:14:50.643855   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 4/120
	I0315 23:14:51.645898   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 5/120
	I0315 23:14:52.647346   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 6/120
	I0315 23:14:53.649674   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 7/120
	I0315 23:14:54.651202   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 8/120
	I0315 23:14:55.652528   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 9/120
	I0315 23:14:56.654054   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 10/120
	I0315 23:14:57.656087   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 11/120
	I0315 23:14:58.657854   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 12/120
	I0315 23:14:59.659254   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 13/120
	I0315 23:15:00.661153   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 14/120
	I0315 23:15:01.662994   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 15/120
	I0315 23:15:02.664444   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 16/120
	I0315 23:15:03.665667   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 17/120
	I0315 23:15:04.667073   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 18/120
	I0315 23:15:05.668509   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 19/120
	I0315 23:15:06.670358   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 20/120
	I0315 23:15:07.671769   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 21/120
	I0315 23:15:08.673782   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 22/120
	I0315 23:15:09.675663   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 23/120
	I0315 23:15:10.677148   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 24/120
	I0315 23:15:11.678504   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 25/120
	I0315 23:15:12.679858   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 26/120
	I0315 23:15:13.681971   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 27/120
	I0315 23:15:14.683585   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 28/120
	I0315 23:15:15.685805   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 29/120
	I0315 23:15:16.687957   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 30/120
	I0315 23:15:17.689775   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 31/120
	I0315 23:15:18.691214   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 32/120
	I0315 23:15:19.692656   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 33/120
	I0315 23:15:20.693943   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 34/120
	I0315 23:15:21.695948   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 35/120
	I0315 23:15:22.697930   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 36/120
	I0315 23:15:23.699715   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 37/120
	I0315 23:15:24.701961   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 38/120
	I0315 23:15:25.703453   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 39/120
	I0315 23:15:26.705583   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 40/120
	I0315 23:15:27.707119   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 41/120
	I0315 23:15:28.708450   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 42/120
	I0315 23:15:29.709772   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 43/120
	I0315 23:15:30.711726   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 44/120
	I0315 23:15:31.713824   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 45/120
	I0315 23:15:32.715154   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 46/120
	I0315 23:15:33.716992   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 47/120
	I0315 23:15:34.719170   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 48/120
	I0315 23:15:35.720697   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 49/120
	I0315 23:15:36.722662   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 50/120
	I0315 23:15:37.724051   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 51/120
	I0315 23:15:38.725890   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 52/120
	I0315 23:15:39.727375   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 53/120
	I0315 23:15:40.728671   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 54/120
	I0315 23:15:41.730585   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 55/120
	I0315 23:15:42.731945   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 56/120
	I0315 23:15:43.733169   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 57/120
	I0315 23:15:44.734455   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 58/120
	I0315 23:15:45.736050   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 59/120
	I0315 23:15:46.738306   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 60/120
	I0315 23:15:47.740191   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 61/120
	I0315 23:15:48.741558   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 62/120
	I0315 23:15:49.743892   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 63/120
	I0315 23:15:50.745807   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 64/120
	I0315 23:15:51.747868   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 65/120
	I0315 23:15:52.749229   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 66/120
	I0315 23:15:53.751139   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 67/120
	I0315 23:15:54.752708   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 68/120
	I0315 23:15:55.754046   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 69/120
	I0315 23:15:56.756210   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 70/120
	I0315 23:15:57.757990   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 71/120
	I0315 23:15:58.759190   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 72/120
	I0315 23:15:59.760710   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 73/120
	I0315 23:16:00.761946   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 74/120
	I0315 23:16:01.763515   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 75/120
	I0315 23:16:02.765921   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 76/120
	I0315 23:16:03.768224   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 77/120
	I0315 23:16:04.769816   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 78/120
	I0315 23:16:05.771073   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 79/120
	I0315 23:16:06.772995   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 80/120
	I0315 23:16:07.775623   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 81/120
	I0315 23:16:08.777722   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 82/120
	I0315 23:16:09.779414   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 83/120
	I0315 23:16:10.780734   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 84/120
	I0315 23:16:11.782525   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 85/120
	I0315 23:16:12.784010   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 86/120
	I0315 23:16:13.785759   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 87/120
	I0315 23:16:14.787129   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 88/120
	I0315 23:16:15.788986   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 89/120
	I0315 23:16:16.791186   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 90/120
	I0315 23:16:17.792607   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 91/120
	I0315 23:16:18.794393   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 92/120
	I0315 23:16:19.795831   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 93/120
	I0315 23:16:20.798062   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 94/120
	I0315 23:16:21.800077   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 95/120
	I0315 23:16:22.801784   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 96/120
	I0315 23:16:23.803127   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 97/120
	I0315 23:16:24.805204   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 98/120
	I0315 23:16:25.806473   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 99/120
	I0315 23:16:26.808649   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 100/120
	I0315 23:16:27.809912   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 101/120
	I0315 23:16:28.811560   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 102/120
	I0315 23:16:29.813737   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 103/120
	I0315 23:16:30.815703   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 104/120
	I0315 23:16:31.817603   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 105/120
	I0315 23:16:32.818957   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 106/120
	I0315 23:16:33.820468   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 107/120
	I0315 23:16:34.822273   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 108/120
	I0315 23:16:35.823831   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 109/120
	I0315 23:16:36.825634   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 110/120
	I0315 23:16:37.827805   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 111/120
	I0315 23:16:38.829250   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 112/120
	I0315 23:16:39.830749   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 113/120
	I0315 23:16:40.832149   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 114/120
	I0315 23:16:41.833826   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 115/120
	I0315 23:16:42.835257   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 116/120
	I0315 23:16:43.836733   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 117/120
	I0315 23:16:44.838821   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 118/120
	I0315 23:16:45.840257   95817 main.go:141] libmachine: (ha-285481-m02) Waiting for machine to stop 119/120
	I0315 23:16:46.841581   95817 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 23:16:46.841818   95817 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-285481 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
E0315 23:16:52.246239   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (19.205102406s)

-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0315 23:16:46.902158   96137 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:16:46.902419   96137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:16:46.902429   96137 out.go:304] Setting ErrFile to fd 2...
	I0315 23:16:46.902433   96137 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:16:46.902608   96137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:16:46.902772   96137 out.go:298] Setting JSON to false
	I0315 23:16:46.902811   96137 mustload.go:65] Loading cluster: ha-285481
	I0315 23:16:46.902915   96137 notify.go:220] Checking for updates...
	I0315 23:16:46.903257   96137 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:16:46.903276   96137 status.go:255] checking status of ha-285481 ...
	I0315 23:16:46.903758   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:16:46.903823   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:16:46.926275   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0315 23:16:46.926665   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:16:46.927303   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:16:46.927351   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:16:46.927736   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:16:46.927960   96137 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:16:46.929571   96137 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:16:46.929593   96137 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:16:46.929899   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:16:46.929954   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:16:46.944182   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39867
	I0315 23:16:46.944544   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:16:46.945024   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:16:46.945047   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:16:46.945381   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:16:46.945574   96137 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:16:46.948333   96137 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:16:46.948850   96137 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:16:46.948885   96137 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:16:46.949066   96137 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:16:46.949369   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:16:46.949406   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:16:46.964069   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35337
	I0315 23:16:46.964509   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:16:46.964955   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:16:46.964991   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:16:46.965313   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:16:46.965494   96137 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:16:46.965687   96137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:16:46.965712   96137 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:16:46.968365   96137 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:16:46.968849   96137 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:16:46.968873   96137 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:16:46.969081   96137 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:16:46.969276   96137 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:16:46.969424   96137 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:16:46.969577   96137 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:16:47.053775   96137 ssh_runner.go:195] Run: systemctl --version
	I0315 23:16:47.062119   96137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:16:47.078482   96137 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:16:47.078507   96137 api_server.go:166] Checking apiserver status ...
	I0315 23:16:47.078536   96137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:16:47.093225   96137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:16:47.103804   96137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:16:47.103851   96137 ssh_runner.go:195] Run: ls
	I0315 23:16:47.108574   96137 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:16:47.113704   96137 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:16:47.113729   96137 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:16:47.113740   96137 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:16:47.113759   96137 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:16:47.114135   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:16:47.114190   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:16:47.129250   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35303
	I0315 23:16:47.129780   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:16:47.130336   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:16:47.130359   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:16:47.130672   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:16:47.130864   96137 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:16:47.132605   96137 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:16:47.132621   96137 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:16:47.132956   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:16:47.133002   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:16:47.147644   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34801
	I0315 23:16:47.148155   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:16:47.148638   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:16:47.148656   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:16:47.148922   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:16:47.149111   96137 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:16:47.151544   96137 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:16:47.151938   96137 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:16:47.151981   96137 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:16:47.152181   96137 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:16:47.152448   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:16:47.152491   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:16:47.166470   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32983
	I0315 23:16:47.166884   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:16:47.167307   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:16:47.167345   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:16:47.167645   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:16:47.167857   96137 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:16:47.168041   96137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:16:47.168061   96137 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:16:47.170488   96137 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:16:47.170828   96137 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:16:47.170857   96137 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:16:47.171013   96137 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:16:47.171219   96137 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:16:47.171533   96137 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:16:47.171672   96137 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:05.659595   96137 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:05.659719   96137 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:05.659743   96137 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:05.659766   96137 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:05.659794   96137 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:05.659802   96137 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:05.660184   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:05.660263   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:05.675125   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35901
	I0315 23:17:05.675681   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:05.676208   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:17:05.676233   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:05.676542   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:05.676730   96137 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:05.678549   96137 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:05.678583   96137 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:05.678930   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:05.678968   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:05.695165   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39313
	I0315 23:17:05.695604   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:05.696070   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:17:05.696096   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:05.696449   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:05.696658   96137 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:05.699489   96137 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:05.699862   96137 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:05.699887   96137 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:05.700061   96137 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:05.700359   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:05.700394   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:05.714660   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37803
	I0315 23:17:05.715161   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:05.715667   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:17:05.715687   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:05.715987   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:05.716200   96137 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:05.716378   96137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:05.716398   96137 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:05.719076   96137 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:05.719570   96137 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:05.719608   96137 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:05.719851   96137 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:05.720035   96137 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:05.720225   96137 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:05.720374   96137 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:05.809259   96137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:05.828494   96137 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:05.828528   96137 api_server.go:166] Checking apiserver status ...
	I0315 23:17:05.828585   96137 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:05.848968   96137 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:05.859441   96137 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:05.859521   96137 ssh_runner.go:195] Run: ls
	I0315 23:17:05.864353   96137 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:05.871239   96137 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:05.871263   96137 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:05.871274   96137 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:05.871298   96137 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:05.871697   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:05.871745   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:05.886884   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33383
	I0315 23:17:05.887337   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:05.887809   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:17:05.887831   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:05.888185   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:05.888386   96137 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:05.889974   96137 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:05.889993   96137 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:05.890261   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:05.890303   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:05.905478   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39435
	I0315 23:17:05.905894   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:05.906369   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:17:05.906392   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:05.906688   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:05.906876   96137 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:05.909843   96137 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:05.910296   96137 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:05.910339   96137 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:05.910516   96137 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:05.910925   96137 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:05.911004   96137 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:05.928379   96137 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45955
	I0315 23:17:05.928835   96137 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:05.929377   96137 main.go:141] libmachine: Using API Version  1
	I0315 23:17:05.929403   96137 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:05.929786   96137 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:05.929991   96137 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:05.930202   96137 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:05.930230   96137 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:05.933296   96137 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:05.933874   96137 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:05.933910   96137 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:05.934084   96137 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:05.934290   96137 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:05.934453   96137 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:05.934612   96137 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:06.024828   96137 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:06.045944   96137 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-285481 -n ha-285481
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-285481 logs -n 25: (1.535057187s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481:/home/docker/cp-test_ha-285481-m03_ha-285481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481 sudo cat                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m04 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp testdata/cp-test.txt                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481:/home/docker/cp-test_ha-285481-m04_ha-285481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481 sudo cat                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03:/home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m03 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-285481 node stop m02 -v=7                                                     | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 23:09:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 23:09:55.829425   92071 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:09:55.829892   92071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:55.829911   92071 out.go:304] Setting ErrFile to fd 2...
	I0315 23:09:55.829918   92071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:55.830376   92071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:09:55.831360   92071 out.go:298] Setting JSON to false
	I0315 23:09:55.832277   92071 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6746,"bootTime":1710537450,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:09:55.832345   92071 start.go:139] virtualization: kvm guest
	I0315 23:09:55.834345   92071 out.go:177] * [ha-285481] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:09:55.835694   92071 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:09:55.835735   92071 notify.go:220] Checking for updates...
	I0315 23:09:55.836938   92071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:09:55.838167   92071 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:09:55.839539   92071 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:55.840906   92071 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:09:55.842290   92071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:09:55.843777   92071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:09:55.877928   92071 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 23:09:55.879144   92071 start.go:297] selected driver: kvm2
	I0315 23:09:55.879164   92071 start.go:901] validating driver "kvm2" against <nil>
	I0315 23:09:55.879176   92071 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:09:55.879928   92071 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:09:55.880022   92071 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:09:55.894520   92071 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:09:55.894572   92071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 23:09:55.894762   92071 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:09:55.894823   92071 cni.go:84] Creating CNI manager for ""
	I0315 23:09:55.894836   92071 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0315 23:09:55.894840   92071 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 23:09:55.894890   92071 start.go:340] cluster config:
	{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:09:55.895006   92071 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:09:55.896676   92071 out.go:177] * Starting "ha-285481" primary control-plane node in "ha-285481" cluster
	I0315 23:09:55.897810   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:09:55.897836   92071 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 23:09:55.897843   92071 cache.go:56] Caching tarball of preloaded images
	I0315 23:09:55.897913   92071 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:09:55.897923   92071 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:09:55.898203   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:09:55.898221   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json: {Name:mkaa91889e299a827fa98bd8233aee91a275a9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:09:55.898345   92071 start.go:360] acquireMachinesLock for ha-285481: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:09:55.898371   92071 start.go:364] duration metric: took 13.866µs to acquireMachinesLock for "ha-285481"
	I0315 23:09:55.898387   92071 start.go:93] Provisioning new machine with config: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:09:55.898436   92071 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 23:09:55.900023   92071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:09:55.900136   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:09:55.900169   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:09:55.913773   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0315 23:09:55.914175   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:09:55.914696   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:09:55.914717   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:09:55.915065   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:09:55.915244   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:09:55.915397   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:09:55.915526   92071 start.go:159] libmachine.API.Create for "ha-285481" (driver="kvm2")
	I0315 23:09:55.915561   92071 client.go:168] LocalClient.Create starting
	I0315 23:09:55.915594   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:09:55.915627   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:09:55.915643   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:09:55.915694   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:09:55.915712   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:09:55.915735   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:09:55.915754   92071 main.go:141] libmachine: Running pre-create checks...
	I0315 23:09:55.915766   92071 main.go:141] libmachine: (ha-285481) Calling .PreCreateCheck
	I0315 23:09:55.916074   92071 main.go:141] libmachine: (ha-285481) Calling .GetConfigRaw
	I0315 23:09:55.916392   92071 main.go:141] libmachine: Creating machine...
	I0315 23:09:55.916404   92071 main.go:141] libmachine: (ha-285481) Calling .Create
	I0315 23:09:55.916528   92071 main.go:141] libmachine: (ha-285481) Creating KVM machine...
	I0315 23:09:55.917654   92071 main.go:141] libmachine: (ha-285481) DBG | found existing default KVM network
	I0315 23:09:55.918345   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:55.918228   92093 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0315 23:09:55.918403   92071 main.go:141] libmachine: (ha-285481) DBG | created network xml: 
	I0315 23:09:55.918424   92071 main.go:141] libmachine: (ha-285481) DBG | <network>
	I0315 23:09:55.918432   92071 main.go:141] libmachine: (ha-285481) DBG |   <name>mk-ha-285481</name>
	I0315 23:09:55.918442   92071 main.go:141] libmachine: (ha-285481) DBG |   <dns enable='no'/>
	I0315 23:09:55.918466   92071 main.go:141] libmachine: (ha-285481) DBG |   
	I0315 23:09:55.918481   92071 main.go:141] libmachine: (ha-285481) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 23:09:55.918490   92071 main.go:141] libmachine: (ha-285481) DBG |     <dhcp>
	I0315 23:09:55.918502   92071 main.go:141] libmachine: (ha-285481) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 23:09:55.918520   92071 main.go:141] libmachine: (ha-285481) DBG |     </dhcp>
	I0315 23:09:55.918529   92071 main.go:141] libmachine: (ha-285481) DBG |   </ip>
	I0315 23:09:55.918573   92071 main.go:141] libmachine: (ha-285481) DBG |   
	I0315 23:09:55.918598   92071 main.go:141] libmachine: (ha-285481) DBG | </network>
	I0315 23:09:55.918610   92071 main.go:141] libmachine: (ha-285481) DBG | 
	I0315 23:09:55.923112   92071 main.go:141] libmachine: (ha-285481) DBG | trying to create private KVM network mk-ha-285481 192.168.39.0/24...
	I0315 23:09:55.994643   92071 main.go:141] libmachine: (ha-285481) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481 ...
	I0315 23:09:55.994672   92071 main.go:141] libmachine: (ha-285481) DBG | private KVM network mk-ha-285481 192.168.39.0/24 created
	I0315 23:09:55.994691   92071 main.go:141] libmachine: (ha-285481) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:09:55.994719   92071 main.go:141] libmachine: (ha-285481) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:09:55.994736   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:55.994570   92093 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:56.241606   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:56.241464   92093 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa...
	I0315 23:09:56.279521   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:56.279414   92093 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/ha-285481.rawdisk...
	I0315 23:09:56.279549   92071 main.go:141] libmachine: (ha-285481) DBG | Writing magic tar header
	I0315 23:09:56.279559   92071 main.go:141] libmachine: (ha-285481) DBG | Writing SSH key tar header
	I0315 23:09:56.279566   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:56.279532   92093 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481 ...
	I0315 23:09:56.279703   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481
	I0315 23:09:56.279728   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:09:56.279741   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481 (perms=drwx------)
	I0315 23:09:56.279757   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:09:56.279768   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:09:56.279784   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:09:56.279797   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:09:56.279817   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:09:56.279830   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:56.279845   92071 main.go:141] libmachine: (ha-285481) Creating domain...
	I0315 23:09:56.279859   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:09:56.279874   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:09:56.279885   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:09:56.279896   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home
	I0315 23:09:56.279905   92071 main.go:141] libmachine: (ha-285481) DBG | Skipping /home - not owner
	I0315 23:09:56.280884   92071 main.go:141] libmachine: (ha-285481) define libvirt domain using xml: 
	I0315 23:09:56.280905   92071 main.go:141] libmachine: (ha-285481) <domain type='kvm'>
	I0315 23:09:56.280911   92071 main.go:141] libmachine: (ha-285481)   <name>ha-285481</name>
	I0315 23:09:56.280920   92071 main.go:141] libmachine: (ha-285481)   <memory unit='MiB'>2200</memory>
	I0315 23:09:56.280928   92071 main.go:141] libmachine: (ha-285481)   <vcpu>2</vcpu>
	I0315 23:09:56.280934   92071 main.go:141] libmachine: (ha-285481)   <features>
	I0315 23:09:56.280942   92071 main.go:141] libmachine: (ha-285481)     <acpi/>
	I0315 23:09:56.280948   92071 main.go:141] libmachine: (ha-285481)     <apic/>
	I0315 23:09:56.280964   92071 main.go:141] libmachine: (ha-285481)     <pae/>
	I0315 23:09:56.280969   92071 main.go:141] libmachine: (ha-285481)     
	I0315 23:09:56.280974   92071 main.go:141] libmachine: (ha-285481)   </features>
	I0315 23:09:56.280979   92071 main.go:141] libmachine: (ha-285481)   <cpu mode='host-passthrough'>
	I0315 23:09:56.280983   92071 main.go:141] libmachine: (ha-285481)   
	I0315 23:09:56.280992   92071 main.go:141] libmachine: (ha-285481)   </cpu>
	I0315 23:09:56.281017   92071 main.go:141] libmachine: (ha-285481)   <os>
	I0315 23:09:56.281041   92071 main.go:141] libmachine: (ha-285481)     <type>hvm</type>
	I0315 23:09:56.281051   92071 main.go:141] libmachine: (ha-285481)     <boot dev='cdrom'/>
	I0315 23:09:56.281071   92071 main.go:141] libmachine: (ha-285481)     <boot dev='hd'/>
	I0315 23:09:56.281085   92071 main.go:141] libmachine: (ha-285481)     <bootmenu enable='no'/>
	I0315 23:09:56.281095   92071 main.go:141] libmachine: (ha-285481)   </os>
	I0315 23:09:56.281106   92071 main.go:141] libmachine: (ha-285481)   <devices>
	I0315 23:09:56.281117   92071 main.go:141] libmachine: (ha-285481)     <disk type='file' device='cdrom'>
	I0315 23:09:56.281134   92071 main.go:141] libmachine: (ha-285481)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/boot2docker.iso'/>
	I0315 23:09:56.281149   92071 main.go:141] libmachine: (ha-285481)       <target dev='hdc' bus='scsi'/>
	I0315 23:09:56.281163   92071 main.go:141] libmachine: (ha-285481)       <readonly/>
	I0315 23:09:56.281173   92071 main.go:141] libmachine: (ha-285481)     </disk>
	I0315 23:09:56.281185   92071 main.go:141] libmachine: (ha-285481)     <disk type='file' device='disk'>
	I0315 23:09:56.281197   92071 main.go:141] libmachine: (ha-285481)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:09:56.281212   92071 main.go:141] libmachine: (ha-285481)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/ha-285481.rawdisk'/>
	I0315 23:09:56.281227   92071 main.go:141] libmachine: (ha-285481)       <target dev='hda' bus='virtio'/>
	I0315 23:09:56.281239   92071 main.go:141] libmachine: (ha-285481)     </disk>
	I0315 23:09:56.281247   92071 main.go:141] libmachine: (ha-285481)     <interface type='network'>
	I0315 23:09:56.281259   92071 main.go:141] libmachine: (ha-285481)       <source network='mk-ha-285481'/>
	I0315 23:09:56.281267   92071 main.go:141] libmachine: (ha-285481)       <model type='virtio'/>
	I0315 23:09:56.281278   92071 main.go:141] libmachine: (ha-285481)     </interface>
	I0315 23:09:56.281289   92071 main.go:141] libmachine: (ha-285481)     <interface type='network'>
	I0315 23:09:56.281308   92071 main.go:141] libmachine: (ha-285481)       <source network='default'/>
	I0315 23:09:56.281332   92071 main.go:141] libmachine: (ha-285481)       <model type='virtio'/>
	I0315 23:09:56.281344   92071 main.go:141] libmachine: (ha-285481)     </interface>
	I0315 23:09:56.281354   92071 main.go:141] libmachine: (ha-285481)     <serial type='pty'>
	I0315 23:09:56.281367   92071 main.go:141] libmachine: (ha-285481)       <target port='0'/>
	I0315 23:09:56.281381   92071 main.go:141] libmachine: (ha-285481)     </serial>
	I0315 23:09:56.281394   92071 main.go:141] libmachine: (ha-285481)     <console type='pty'>
	I0315 23:09:56.281405   92071 main.go:141] libmachine: (ha-285481)       <target type='serial' port='0'/>
	I0315 23:09:56.281419   92071 main.go:141] libmachine: (ha-285481)     </console>
	I0315 23:09:56.281428   92071 main.go:141] libmachine: (ha-285481)     <rng model='virtio'>
	I0315 23:09:56.281436   92071 main.go:141] libmachine: (ha-285481)       <backend model='random'>/dev/random</backend>
	I0315 23:09:56.281446   92071 main.go:141] libmachine: (ha-285481)     </rng>
	I0315 23:09:56.281460   92071 main.go:141] libmachine: (ha-285481)     
	I0315 23:09:56.281471   92071 main.go:141] libmachine: (ha-285481)     
	I0315 23:09:56.281481   92071 main.go:141] libmachine: (ha-285481)   </devices>
	I0315 23:09:56.281492   92071 main.go:141] libmachine: (ha-285481) </domain>
	I0315 23:09:56.281501   92071 main.go:141] libmachine: (ha-285481) 
	I0315 23:09:56.285700   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:db:c7:8c in network default
	I0315 23:09:56.286236   92071 main.go:141] libmachine: (ha-285481) Ensuring networks are active...
	I0315 23:09:56.286255   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:56.286909   92071 main.go:141] libmachine: (ha-285481) Ensuring network default is active
	I0315 23:09:56.287292   92071 main.go:141] libmachine: (ha-285481) Ensuring network mk-ha-285481 is active
	I0315 23:09:56.287861   92071 main.go:141] libmachine: (ha-285481) Getting domain xml...
	I0315 23:09:56.288631   92071 main.go:141] libmachine: (ha-285481) Creating domain...
	I0315 23:09:57.454593   92071 main.go:141] libmachine: (ha-285481) Waiting to get IP...
	I0315 23:09:57.455445   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:57.455860   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:57.455920   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:57.455859   92093 retry.go:31] will retry after 303.440345ms: waiting for machine to come up
	I0315 23:09:57.761405   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:57.761884   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:57.761915   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:57.761840   92093 retry.go:31] will retry after 353.723834ms: waiting for machine to come up
	I0315 23:09:58.117512   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:58.117940   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:58.117961   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:58.117876   92093 retry.go:31] will retry after 425.710423ms: waiting for machine to come up
	I0315 23:09:58.545353   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:58.545839   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:58.545867   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:58.545786   92093 retry.go:31] will retry after 592.484289ms: waiting for machine to come up
	I0315 23:09:59.139667   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:59.140172   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:59.140211   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:59.140142   92093 retry.go:31] will retry after 656.027969ms: waiting for machine to come up
	I0315 23:09:59.797914   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:59.798347   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:59.798376   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:59.798294   92093 retry.go:31] will retry after 647.178612ms: waiting for machine to come up
	I0315 23:10:00.447161   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:00.447598   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:00.447636   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:00.447542   92093 retry.go:31] will retry after 1.030593597s: waiting for machine to come up
	I0315 23:10:01.479515   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:01.479916   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:01.479972   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:01.479896   92093 retry.go:31] will retry after 1.239485655s: waiting for machine to come up
	I0315 23:10:02.720509   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:02.720970   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:02.721000   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:02.720900   92093 retry.go:31] will retry after 1.308366089s: waiting for machine to come up
	I0315 23:10:04.031407   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:04.031731   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:04.031757   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:04.031709   92093 retry.go:31] will retry after 2.03239829s: waiting for machine to come up
	I0315 23:10:06.065771   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:06.066130   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:06.066178   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:06.066092   92093 retry.go:31] will retry after 2.159259052s: waiting for machine to come up
	I0315 23:10:08.228491   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:08.228961   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:08.228989   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:08.228908   92093 retry.go:31] will retry after 2.816344286s: waiting for machine to come up
	I0315 23:10:11.047182   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:11.047578   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:11.047607   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:11.047526   92093 retry.go:31] will retry after 3.09430771s: waiting for machine to come up
	I0315 23:10:14.145796   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:14.146239   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:14.146270   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:14.146194   92093 retry.go:31] will retry after 5.256327871s: waiting for machine to come up
	I0315 23:10:19.406569   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.407105   92071 main.go:141] libmachine: (ha-285481) Found IP for machine: 192.168.39.23
	I0315 23:10:19.407134   92071 main.go:141] libmachine: (ha-285481) Reserving static IP address...
	I0315 23:10:19.407147   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has current primary IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.407624   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find host DHCP lease matching {name: "ha-285481", mac: "52:54:00:b7:7a:0e", ip: "192.168.39.23"} in network mk-ha-285481
	I0315 23:10:19.481094   92071 main.go:141] libmachine: (ha-285481) DBG | Getting to WaitForSSH function...
	I0315 23:10:19.481132   92071 main.go:141] libmachine: (ha-285481) Reserved static IP address: 192.168.39.23
	I0315 23:10:19.481145   92071 main.go:141] libmachine: (ha-285481) Waiting for SSH to be available...
	I0315 23:10:19.483843   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.484309   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.484336   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.484495   92071 main.go:141] libmachine: (ha-285481) DBG | Using SSH client type: external
	I0315 23:10:19.484528   92071 main.go:141] libmachine: (ha-285481) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa (-rw-------)
	I0315 23:10:19.484582   92071 main.go:141] libmachine: (ha-285481) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 23:10:19.484598   92071 main.go:141] libmachine: (ha-285481) DBG | About to run SSH command:
	I0315 23:10:19.484637   92071 main.go:141] libmachine: (ha-285481) DBG | exit 0
	I0315 23:10:19.607423   92071 main.go:141] libmachine: (ha-285481) DBG | SSH cmd err, output: <nil>: 
	I0315 23:10:19.607745   92071 main.go:141] libmachine: (ha-285481) KVM machine creation complete!
	I0315 23:10:19.608197   92071 main.go:141] libmachine: (ha-285481) Calling .GetConfigRaw
	I0315 23:10:19.608743   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:19.608914   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:19.609046   92071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 23:10:19.609056   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:19.610303   92071 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 23:10:19.610320   92071 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 23:10:19.610325   92071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 23:10:19.610341   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.612773   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.613134   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.613162   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.613296   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.613505   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.613672   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.613810   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.613987   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.614200   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.614214   92071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 23:10:19.710588   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:10:19.710617   92071 main.go:141] libmachine: Detecting the provisioner...
	I0315 23:10:19.710626   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.713746   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.714059   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.714092   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.714252   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.714433   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.714623   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.714772   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.714951   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.715170   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.715186   92071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 23:10:19.816386   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 23:10:19.816464   92071 main.go:141] libmachine: found compatible host: buildroot
	I0315 23:10:19.816482   92071 main.go:141] libmachine: Provisioning with buildroot...
	I0315 23:10:19.816491   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:10:19.816804   92071 buildroot.go:166] provisioning hostname "ha-285481"
	I0315 23:10:19.816834   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:10:19.817029   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.819775   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.820151   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.820179   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.820410   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.820602   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.820759   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.820916   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.821115   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.821292   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.821304   92071 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481 && echo "ha-285481" | sudo tee /etc/hostname
	I0315 23:10:19.932724   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481
	
	I0315 23:10:19.932748   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.935563   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.935915   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.935952   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.936178   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.936435   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.936621   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.936801   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.937033   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.937206   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.937221   92071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:10:20.045028   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:10:20.045056   92071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:10:20.045098   92071 buildroot.go:174] setting up certificates
	I0315 23:10:20.045111   92071 provision.go:84] configureAuth start
	I0315 23:10:20.045121   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:10:20.045441   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:20.048493   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.048847   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.048873   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.049038   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.051186   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.051594   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.051617   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.051770   92071 provision.go:143] copyHostCerts
	I0315 23:10:20.051814   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:10:20.051849   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:10:20.051858   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:10:20.051923   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:10:20.052019   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:10:20.052038   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:10:20.052045   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:10:20.052077   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:10:20.052124   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:10:20.052141   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:10:20.052147   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:10:20.052166   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:10:20.052209   92071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481 san=[127.0.0.1 192.168.39.23 ha-285481 localhost minikube]
	I0315 23:10:20.169384   92071 provision.go:177] copyRemoteCerts
	I0315 23:10:20.169453   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:10:20.169478   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.172180   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.172464   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.172503   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.172653   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.172835   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.172977   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.173128   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.254138   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:10:20.254208   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:10:20.279254   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:10:20.279331   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0315 23:10:20.310178   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:10:20.310236   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:10:20.335032   92071 provision.go:87] duration metric: took 289.890096ms to configureAuth
	I0315 23:10:20.335071   92071 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:10:20.335299   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:10:20.335415   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.338003   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.338364   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.338388   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.338612   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.338796   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.338935   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.339043   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.339242   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:20.339444   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:20.339460   92071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:10:20.596446   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:10:20.596471   92071 main.go:141] libmachine: Checking connection to Docker...
	I0315 23:10:20.596479   92071 main.go:141] libmachine: (ha-285481) Calling .GetURL
	I0315 23:10:20.597925   92071 main.go:141] libmachine: (ha-285481) DBG | Using libvirt version 6000000
	I0315 23:10:20.600348   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.600732   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.600758   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.601068   92071 main.go:141] libmachine: Docker is up and running!
	I0315 23:10:20.601085   92071 main.go:141] libmachine: Reticulating splines...
	I0315 23:10:20.601093   92071 client.go:171] duration metric: took 24.685520422s to LocalClient.Create
	I0315 23:10:20.601115   92071 start.go:167] duration metric: took 24.685590841s to libmachine.API.Create "ha-285481"
	I0315 23:10:20.601129   92071 start.go:293] postStartSetup for "ha-285481" (driver="kvm2")
	I0315 23:10:20.601142   92071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:10:20.601165   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.601427   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:10:20.601451   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.603810   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.604189   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.604217   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.604380   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.604571   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.604815   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.604994   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.686497   92071 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:10:20.691241   92071 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:10:20.691266   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:10:20.691341   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:10:20.691434   92071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:10:20.691450   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:10:20.691584   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:10:20.701133   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:10:20.726838   92071 start.go:296] duration metric: took 125.695353ms for postStartSetup
	I0315 23:10:20.726911   92071 main.go:141] libmachine: (ha-285481) Calling .GetConfigRaw
	I0315 23:10:20.727477   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:20.730235   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.730709   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.730741   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.731002   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:10:20.731283   92071 start.go:128] duration metric: took 24.832834817s to createHost
	I0315 23:10:20.731346   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.733616   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.733937   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.733965   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.734066   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.734236   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.734383   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.734498   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.734684   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:20.734902   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:20.734920   92071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:10:20.836271   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544220.811161677
	
	I0315 23:10:20.836294   92071 fix.go:216] guest clock: 1710544220.811161677
	I0315 23:10:20.836301   92071 fix.go:229] Guest: 2024-03-15 23:10:20.811161677 +0000 UTC Remote: 2024-03-15 23:10:20.731302898 +0000 UTC m=+24.949631004 (delta=79.858779ms)
	I0315 23:10:20.836320   92071 fix.go:200] guest clock delta is within tolerance: 79.858779ms
	I0315 23:10:20.836325   92071 start.go:83] releasing machines lock for "ha-285481", held for 24.937945305s
	I0315 23:10:20.836341   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.836653   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:20.839376   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.839760   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.839784   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.839935   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.840574   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.840847   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.840976   92071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:10:20.841032   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.841088   92071 ssh_runner.go:195] Run: cat /version.json
	I0315 23:10:20.841118   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.843599   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.843990   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.844036   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.844404   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.844621   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.845134   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.845300   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.845459   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.845548   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.845580   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.845787   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.845975   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.846141   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.846294   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.920798   92071 ssh_runner.go:195] Run: systemctl --version
	I0315 23:10:20.939655   92071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:10:21.103285   92071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:10:21.109211   92071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:10:21.109277   92071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:10:21.126535   92071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 23:10:21.126563   92071 start.go:494] detecting cgroup driver to use...
	I0315 23:10:21.126620   92071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:10:21.143540   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:10:21.158524   92071 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:10:21.158600   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:10:21.173250   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:10:21.187931   92071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:10:21.304391   92071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:10:21.453889   92071 docker.go:233] disabling docker service ...
	I0315 23:10:21.453952   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:10:21.470128   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:10:21.484143   92071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:10:21.612833   92071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:10:21.730896   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:10:21.745268   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:10:21.763756   92071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:10:21.763801   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.774440   92071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:10:21.774515   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.785161   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.795957   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.807171   92071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:10:21.818498   92071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:10:21.828168   92071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 23:10:21.828228   92071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 23:10:21.841560   92071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:10:21.851376   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:10:21.966238   92071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:10:22.106233   92071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:10:22.106331   92071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:10:22.111335   92071 start.go:562] Will wait 60s for crictl version
	I0315 23:10:22.111405   92071 ssh_runner.go:195] Run: which crictl
	I0315 23:10:22.115431   92071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:10:22.151571   92071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:10:22.151661   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:10:22.181391   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:10:22.212534   92071 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:10:22.213905   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:22.216786   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:22.217200   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:22.217226   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:22.217396   92071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:10:22.221797   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:10:22.235310   92071 kubeadm.go:877] updating cluster {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 23:10:22.235457   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:10:22.235530   92071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:10:22.271196   92071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 23:10:22.271272   92071 ssh_runner.go:195] Run: which lz4
	I0315 23:10:22.275662   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0315 23:10:22.275745   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0315 23:10:22.280159   92071 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 23:10:22.280183   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 23:10:23.946320   92071 crio.go:444] duration metric: took 1.670597717s to copy over tarball
	I0315 23:10:23.946382   92071 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 23:10:26.362268   92071 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415857165s)
	I0315 23:10:26.362308   92071 crio.go:451] duration metric: took 2.415961561s to extract the tarball
	I0315 23:10:26.362325   92071 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 23:10:26.403571   92071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:10:26.462157   92071 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:10:26.462179   92071 cache_images.go:84] Images are preloaded, skipping loading
	I0315 23:10:26.462188   92071 kubeadm.go:928] updating node { 192.168.39.23 8443 v1.28.4 crio true true} ...
	I0315 23:10:26.462317   92071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:10:26.462382   92071 ssh_runner.go:195] Run: crio config
	I0315 23:10:26.520625   92071 cni.go:84] Creating CNI manager for ""
	I0315 23:10:26.520660   92071 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 23:10:26.520679   92071 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 23:10:26.520708   92071 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-285481 NodeName:ha-285481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 23:10:26.520882   92071 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-285481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 23:10:26.520916   92071 kube-vip.go:111] generating kube-vip config ...
	I0315 23:10:26.520969   92071 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:10:26.543023   92071 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:10:26.543165   92071 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 23:10:26.543227   92071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:10:26.560611   92071 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 23:10:26.560701   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 23:10:26.576707   92071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 23:10:26.595529   92071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:10:26.614498   92071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 23:10:26.633442   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:10:26.652735   92071 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:10:26.657112   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:10:26.670904   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:10:26.805490   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:10:26.824183   92071 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.23
	I0315 23:10:26.824213   92071 certs.go:194] generating shared ca certs ...
	I0315 23:10:26.824245   92071 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:26.824451   92071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:10:26.824519   92071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:10:26.824536   92071 certs.go:256] generating profile certs ...
	I0315 23:10:26.824608   92071 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:10:26.824639   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt with IP's: []
	I0315 23:10:26.980160   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt ...
	I0315 23:10:26.980192   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt: {Name:mk1c5048a214d2dced4203732d39a9764f6dbaea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:26.980376   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key ...
	I0315 23:10:26.980393   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key: {Name:mka52854b81f06993ecaf7335cb216481234bb75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:26.980505   92071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1
	I0315 23:10:26.980528   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.254]
	I0315 23:10:27.243461   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1 ...
	I0315 23:10:27.243497   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1: {Name:mkbb04bbe69628bfaf0244064cb50aa428de2a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.243668   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1 ...
	I0315 23:10:27.243683   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1: {Name:mk4aef05ec6f5326a9ced309014d7fc8e63afdaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.243779   92071 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:10:27.243865   92071 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
	I0315 23:10:27.243932   92071 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:10:27.243959   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt with IP's: []
	I0315 23:10:27.446335   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt ...
	I0315 23:10:27.446368   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt: {Name:mkd5ef2f928a3f3f8755be3f9f58bef8a980c22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.446534   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key ...
	I0315 23:10:27.446545   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key: {Name:mkcb6e466047f30218160eb49e0092e4e744f66e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.446618   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:10:27.446639   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:10:27.446653   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:10:27.446671   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:10:27.446685   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:10:27.446699   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:10:27.446711   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:10:27.446721   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:10:27.446770   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:10:27.446802   92071 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:10:27.446812   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:10:27.446831   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:10:27.446853   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:10:27.446878   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:10:27.446916   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:10:27.446947   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.446960   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.446973   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.447580   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:10:27.478347   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:10:27.503643   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:10:27.528078   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:10:27.552558   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 23:10:27.578130   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:10:27.605290   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:10:27.634366   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:10:27.657942   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:10:27.711571   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:10:27.736341   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
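	The apiserver certificate generated and copied above is signed for both the node IP (192.168.39.23) and the kube-vip address (192.168.39.254). A hedged sketch (not minikube code; the path is the destination used in the scp above) that confirms the VIP is among the certificate's SANs:

	    // sancheck.go - sketch: confirm the apiserver cert copied to the node includes
	    // the HA virtual IP among its IP SANs.
	    package main

	    import (
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    )

	    func main() {
	    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(pemBytes)
	    	if block == nil {
	    		panic("no PEM block found")
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}
	    	found := false
	    	for _, ip := range cert.IPAddresses {
	    		fmt.Println("SAN IP:", ip)
	    		if ip.String() == "192.168.39.254" {
	    			found = true
	    		}
	    	}
	    	fmt.Println("VIP present:", found)
	    }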
	I0315 23:10:27.760213   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 23:10:27.777516   92071 ssh_runner.go:195] Run: openssl version
	I0315 23:10:27.783452   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:10:27.795425   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.800173   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.800241   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.806206   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:10:27.816912   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:10:27.828583   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.833719   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.833778   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.839840   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:10:27.850988   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:10:27.861881   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.866445   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.866516   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.872427   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:10:27.883577   92071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:10:27.887936   92071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 23:10:27.887996   92071 kubeadm.go:391] StartCluster: {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:10:27.888090   92071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 23:10:27.888144   92071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 23:10:27.927845   92071 cri.go:89] found id: ""
	I0315 23:10:27.927951   92071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 23:10:27.937856   92071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 23:10:27.947451   92071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 23:10:27.957523   92071 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 23:10:27.957543   92071 kubeadm.go:156] found existing configuration files:
	
	I0315 23:10:27.957589   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 23:10:27.967178   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 23:10:27.967226   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 23:10:27.977156   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 23:10:27.986703   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 23:10:27.986757   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 23:10:27.996624   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 23:10:28.006072   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 23:10:28.006145   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 23:10:28.015996   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 23:10:28.025583   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 23:10:28.025644   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 23:10:28.035560   92071 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 23:10:28.128154   92071 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 23:10:28.128234   92071 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 23:10:28.269481   92071 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 23:10:28.269650   92071 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 23:10:28.269773   92071 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 23:10:28.538562   92071 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 23:10:28.699919   92071 out.go:204]   - Generating certificates and keys ...
	I0315 23:10:28.700037   92071 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 23:10:28.700107   92071 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 23:10:28.979659   92071 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 23:10:29.158651   92071 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 23:10:29.272023   92071 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 23:10:29.380691   92071 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 23:10:29.715709   92071 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 23:10:29.716013   92071 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-285481 localhost] and IPs [192.168.39.23 127.0.0.1 ::1]
	I0315 23:10:29.814560   92071 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 23:10:29.814767   92071 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-285481 localhost] and IPs [192.168.39.23 127.0.0.1 ::1]
	I0315 23:10:29.931539   92071 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 23:10:30.053177   92071 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 23:10:30.165551   92071 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 23:10:30.165840   92071 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 23:10:30.411059   92071 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 23:10:30.579213   92071 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 23:10:30.675045   92071 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 23:10:31.193059   92071 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 23:10:31.193642   92071 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 23:10:31.196710   92071 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 23:10:31.198759   92071 out.go:204]   - Booting up control plane ...
	I0315 23:10:31.198896   92071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 23:10:31.199032   92071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 23:10:31.199133   92071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 23:10:31.214859   92071 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 23:10:31.215647   92071 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 23:10:31.215734   92071 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 23:10:31.344670   92071 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 23:10:37.937756   92071 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.595764 seconds
	I0315 23:10:37.937946   92071 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 23:10:37.963162   92071 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 23:10:38.495548   92071 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 23:10:38.495758   92071 kubeadm.go:309] [mark-control-plane] Marking the node ha-285481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 23:10:39.014489   92071 kubeadm.go:309] [bootstrap-token] Using token: wgx4dt.9t39ji7sy70fmhdi
	I0315 23:10:39.016158   92071 out.go:204]   - Configuring RBAC rules ...
	I0315 23:10:39.016290   92071 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 23:10:39.027081   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 23:10:39.035488   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 23:10:39.039125   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 23:10:39.043055   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 23:10:39.047230   92071 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 23:10:39.064125   92071 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 23:10:39.309590   92071 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 23:10:39.437940   92071 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 23:10:39.438770   92071 kubeadm.go:309] 
	I0315 23:10:39.438846   92071 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 23:10:39.438882   92071 kubeadm.go:309] 
	I0315 23:10:39.439007   92071 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 23:10:39.439019   92071 kubeadm.go:309] 
	I0315 23:10:39.439053   92071 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 23:10:39.439147   92071 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 23:10:39.439231   92071 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 23:10:39.439240   92071 kubeadm.go:309] 
	I0315 23:10:39.439303   92071 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 23:10:39.439313   92071 kubeadm.go:309] 
	I0315 23:10:39.439420   92071 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 23:10:39.439431   92071 kubeadm.go:309] 
	I0315 23:10:39.439509   92071 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 23:10:39.439618   92071 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 23:10:39.439716   92071 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 23:10:39.439725   92071 kubeadm.go:309] 
	I0315 23:10:39.439830   92071 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 23:10:39.439951   92071 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 23:10:39.439968   92071 kubeadm.go:309] 
	I0315 23:10:39.440049   92071 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wgx4dt.9t39ji7sy70fmhdi \
	I0315 23:10:39.440156   92071 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0315 23:10:39.440190   92071 kubeadm.go:309] 	--control-plane 
	I0315 23:10:39.440197   92071 kubeadm.go:309] 
	I0315 23:10:39.440270   92071 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 23:10:39.440278   92071 kubeadm.go:309] 
	I0315 23:10:39.440343   92071 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wgx4dt.9t39ji7sy70fmhdi \
	I0315 23:10:39.440467   92071 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0315 23:10:39.441186   92071 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
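	The --discovery-token-ca-cert-hash value printed in the join commands above is the SHA-256 of the cluster CA's SubjectPublicKeyInfo. A small sketch that recomputes it from ca.crt (illustrative; the path matches the certs directory used earlier in this log):

	    // cahash.go - sketch: recompute the kubeadm discovery-token-ca-cert-hash
	    // (SHA-256 over the CA certificate's raw SubjectPublicKeyInfo).
	    package main

	    import (
	    	"crypto/sha256"
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    )

	    func main() {
	    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(pemBytes)
	    	if block == nil {
	    		panic("no PEM block in ca.crt")
	    	}
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}
	    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	    	fmt.Printf("sha256:%x\n", sum[:])
	    }

	The output should match the hash embedded in the kubeadm join command above if run against the same CA.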
	I0315 23:10:39.441304   92071 cni.go:84] Creating CNI manager for ""
	I0315 23:10:39.441324   92071 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 23:10:39.443030   92071 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0315 23:10:39.444360   92071 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0315 23:10:39.452442   92071 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 23:10:39.452470   92071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0315 23:10:39.479017   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 23:10:40.508703   92071 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.029651541s)
	I0315 23:10:40.508761   92071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 23:10:40.508913   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:40.508944   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-285481 minikube.k8s.io/updated_at=2024_03_15T23_10_40_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=ha-285481 minikube.k8s.io/primary=true
	I0315 23:10:40.527710   92071 ops.go:34] apiserver oom_adj: -16
	I0315 23:10:40.699851   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:41.200042   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:41.700032   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:42.200223   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:42.700035   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:43.200672   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:43.700602   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:44.200566   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:44.700883   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:45.200703   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:45.700875   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:46.200551   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:46.699949   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:47.200200   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:47.700707   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:48.200934   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:48.700781   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:49.200840   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:49.700165   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:50.200515   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:50.700009   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:51.200083   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:51.351359   92071 kubeadm.go:1107] duration metric: took 10.842519902s to wait for elevateKubeSystemPrivileges
	W0315 23:10:51.351400   92071 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 23:10:51.351410   92071 kubeadm.go:393] duration metric: took 23.463419886s to StartCluster
	I0315 23:10:51.351433   92071 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:51.351514   92071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:10:51.352223   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:51.352454   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 23:10:51.352480   92071 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 23:10:51.352540   92071 addons.go:69] Setting storage-provisioner=true in profile "ha-285481"
	I0315 23:10:51.352452   92071 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:10:51.352577   92071 addons.go:69] Setting default-storageclass=true in profile "ha-285481"
	I0315 23:10:51.352583   92071 start.go:240] waiting for startup goroutines ...
	I0315 23:10:51.352569   92071 addons.go:234] Setting addon storage-provisioner=true in "ha-285481"
	I0315 23:10:51.352608   92071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-285481"
	I0315 23:10:51.352636   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:10:51.352694   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:10:51.353014   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.353059   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.353089   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.353132   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.368263   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0315 23:10:51.368668   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0315 23:10:51.368801   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.369169   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.369443   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.369466   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.369830   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.369888   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.369896   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.370105   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:51.370231   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.370819   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.370861   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.372650   92071 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:10:51.372921   92071 kapi.go:59] client config for ha-285481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt", KeyFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key", CAFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 23:10:51.373485   92071 cert_rotation.go:137] Starting client certificate rotation controller
	I0315 23:10:51.373693   92071 addons.go:234] Setting addon default-storageclass=true in "ha-285481"
	I0315 23:10:51.373731   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:10:51.373976   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.374028   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.386183   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0315 23:10:51.386714   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.387358   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.387396   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.387789   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.387990   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:51.388337   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0315 23:10:51.388722   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.389237   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.389270   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.389647   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.390010   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:51.390188   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.390235   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.391880   92071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 23:10:51.393694   92071 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 23:10:51.393716   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 23:10:51.393738   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:51.397022   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.397583   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:51.397613   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.397898   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:51.398126   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:51.398316   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:51.398446   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:51.405908   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0315 23:10:51.406318   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.406880   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.406906   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.407217   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.407429   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:51.408981   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:51.409247   92071 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 23:10:51.409266   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 23:10:51.409286   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:51.411992   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.412491   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:51.412516   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.412652   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:51.412869   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:51.413045   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:51.413215   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:51.587070   92071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 23:10:51.593632   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 23:10:51.614888   92071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 23:10:52.653677   92071 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.060002507s)
	I0315 23:10:52.653716   92071 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
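	The sed pipeline above splices a hosts{} stanza (mapping host.minikube.internal to the host gateway 192.168.39.1) into the CoreDNS Corefile just before the forward plugin. A small sketch of the same string edit on an example Corefile (illustrative only, not the minikube implementation):

	    // corefilepatch.go - sketch of the Corefile edit shown above: insert a hosts{}
	    // block before the "forward . /etc/resolv.conf" line so host.minikube.internal
	    // resolves in-cluster.
	    package main

	    import (
	    	"fmt"
	    	"strings"
	    )

	    func main() {
	    	corefile := `.:53 {
	        errors
	        health
	        kubernetes cluster.local in-addr.arpa ip6.arpa {
	           pods insecure
	           fallthrough in-addr.arpa ip6.arpa
	        }
	        forward . /etc/resolv.conf
	        cache 30
	    }`
	    	hostsBlock := "    hosts {\n       192.168.39.1 host.minikube.internal\n       fallthrough\n    }"
	    	var out []string
	    	for _, line := range strings.Split(corefile, "\n") {
	    		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
	    			out = append(out, hostsBlock)
	    		}
	    		out = append(out, line)
	    	}
	    	fmt.Println(strings.Join(out, "\n"))
	    }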
	I0315 23:10:52.653758   92071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.066641759s)
	I0315 23:10:52.653800   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.653801   92071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.038881304s)
	I0315 23:10:52.653839   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.653854   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.653812   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.654155   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.654172   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654182   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.654186   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654196   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.654197   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654204   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.654209   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654218   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.654226   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.654427   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654444   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654462   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.654518   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654528   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654643   92071 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0315 23:10:52.654650   92071 round_trippers.go:469] Request Headers:
	I0315 23:10:52.654660   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:10:52.654665   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:10:52.670314   92071 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0315 23:10:52.671233   92071 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0315 23:10:52.671253   92071 round_trippers.go:469] Request Headers:
	I0315 23:10:52.671264   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:10:52.671270   92071 round_trippers.go:473]     Content-Type: application/json
	I0315 23:10:52.671279   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:10:52.674221   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:10:52.674379   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.674394   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.674674   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.674697   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.674697   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.676558   92071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0315 23:10:52.677804   92071 addons.go:505] duration metric: took 1.325325685s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0315 23:10:52.677848   92071 start.go:245] waiting for cluster config update ...
	I0315 23:10:52.677864   92071 start.go:254] writing updated cluster config ...
	I0315 23:10:52.679616   92071 out.go:177] 
	I0315 23:10:52.681249   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:10:52.681355   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:10:52.683252   92071 out.go:177] * Starting "ha-285481-m02" control-plane node in "ha-285481" cluster
	I0315 23:10:52.684485   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:10:52.684526   92071 cache.go:56] Caching tarball of preloaded images
	I0315 23:10:52.684625   92071 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:10:52.684637   92071 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:10:52.684724   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:10:52.684916   92071 start.go:360] acquireMachinesLock for ha-285481-m02: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:10:52.684969   92071 start.go:364] duration metric: took 32.696µs to acquireMachinesLock for "ha-285481-m02"
	I0315 23:10:52.684987   92071 start.go:93] Provisioning new machine with config: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cert
Expiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:10:52.685053   92071 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0315 23:10:52.686516   92071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:10:52.686596   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:52.686637   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:52.701524   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0315 23:10:52.701941   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:52.702411   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:52.702430   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:52.702759   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:52.702971   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:10:52.703137   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:10:52.703282   92071 start.go:159] libmachine.API.Create for "ha-285481" (driver="kvm2")
	I0315 23:10:52.703328   92071 client.go:168] LocalClient.Create starting
	I0315 23:10:52.703370   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:10:52.703410   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:10:52.703431   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:10:52.703505   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:10:52.703539   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:10:52.703564   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:10:52.703589   92071 main.go:141] libmachine: Running pre-create checks...
	I0315 23:10:52.703602   92071 main.go:141] libmachine: (ha-285481-m02) Calling .PreCreateCheck
	I0315 23:10:52.703772   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetConfigRaw
	I0315 23:10:52.704283   92071 main.go:141] libmachine: Creating machine...
	I0315 23:10:52.704299   92071 main.go:141] libmachine: (ha-285481-m02) Calling .Create
	I0315 23:10:52.704443   92071 main.go:141] libmachine: (ha-285481-m02) Creating KVM machine...
	I0315 23:10:52.705644   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found existing default KVM network
	I0315 23:10:52.705762   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found existing private KVM network mk-ha-285481
	I0315 23:10:52.705929   92071 main.go:141] libmachine: (ha-285481-m02) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02 ...
	I0315 23:10:52.705958   92071 main.go:141] libmachine: (ha-285481-m02) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:10:52.705996   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:52.705898   92430 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:10:52.706138   92071 main.go:141] libmachine: (ha-285481-m02) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:10:52.950484   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:52.950355   92430 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa...
	I0315 23:10:53.146048   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:53.145865   92430 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/ha-285481-m02.rawdisk...
	I0315 23:10:53.146096   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Writing magic tar header
	I0315 23:10:53.146158   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Writing SSH key tar header
	I0315 23:10:53.146191   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:53.146011   92430 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02 ...
	I0315 23:10:53.146214   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02 (perms=drwx------)
	I0315 23:10:53.146232   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:10:53.146242   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:10:53.146258   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:10:53.146270   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:10:53.146281   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02
	I0315 23:10:53.146295   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:10:53.146305   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:10:53.146342   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:10:53.146384   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:10:53.146396   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:10:53.146408   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:10:53.146416   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home
	I0315 23:10:53.146444   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Skipping /home - not owner
	I0315 23:10:53.146460   92071 main.go:141] libmachine: (ha-285481-m02) Creating domain...
	I0315 23:10:53.147595   92071 main.go:141] libmachine: (ha-285481-m02) define libvirt domain using xml: 
	I0315 23:10:53.147613   92071 main.go:141] libmachine: (ha-285481-m02) <domain type='kvm'>
	I0315 23:10:53.147620   92071 main.go:141] libmachine: (ha-285481-m02)   <name>ha-285481-m02</name>
	I0315 23:10:53.147624   92071 main.go:141] libmachine: (ha-285481-m02)   <memory unit='MiB'>2200</memory>
	I0315 23:10:53.147654   92071 main.go:141] libmachine: (ha-285481-m02)   <vcpu>2</vcpu>
	I0315 23:10:53.147685   92071 main.go:141] libmachine: (ha-285481-m02)   <features>
	I0315 23:10:53.147699   92071 main.go:141] libmachine: (ha-285481-m02)     <acpi/>
	I0315 23:10:53.147711   92071 main.go:141] libmachine: (ha-285481-m02)     <apic/>
	I0315 23:10:53.147720   92071 main.go:141] libmachine: (ha-285481-m02)     <pae/>
	I0315 23:10:53.147733   92071 main.go:141] libmachine: (ha-285481-m02)     
	I0315 23:10:53.147743   92071 main.go:141] libmachine: (ha-285481-m02)   </features>
	I0315 23:10:53.147756   92071 main.go:141] libmachine: (ha-285481-m02)   <cpu mode='host-passthrough'>
	I0315 23:10:53.147768   92071 main.go:141] libmachine: (ha-285481-m02)   
	I0315 23:10:53.147783   92071 main.go:141] libmachine: (ha-285481-m02)   </cpu>
	I0315 23:10:53.147796   92071 main.go:141] libmachine: (ha-285481-m02)   <os>
	I0315 23:10:53.147808   92071 main.go:141] libmachine: (ha-285481-m02)     <type>hvm</type>
	I0315 23:10:53.147821   92071 main.go:141] libmachine: (ha-285481-m02)     <boot dev='cdrom'/>
	I0315 23:10:53.147832   92071 main.go:141] libmachine: (ha-285481-m02)     <boot dev='hd'/>
	I0315 23:10:53.147842   92071 main.go:141] libmachine: (ha-285481-m02)     <bootmenu enable='no'/>
	I0315 23:10:53.147853   92071 main.go:141] libmachine: (ha-285481-m02)   </os>
	I0315 23:10:53.147882   92071 main.go:141] libmachine: (ha-285481-m02)   <devices>
	I0315 23:10:53.147910   92071 main.go:141] libmachine: (ha-285481-m02)     <disk type='file' device='cdrom'>
	I0315 23:10:53.147935   92071 main.go:141] libmachine: (ha-285481-m02)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/boot2docker.iso'/>
	I0315 23:10:53.147954   92071 main.go:141] libmachine: (ha-285481-m02)       <target dev='hdc' bus='scsi'/>
	I0315 23:10:53.147967   92071 main.go:141] libmachine: (ha-285481-m02)       <readonly/>
	I0315 23:10:53.147973   92071 main.go:141] libmachine: (ha-285481-m02)     </disk>
	I0315 23:10:53.147984   92071 main.go:141] libmachine: (ha-285481-m02)     <disk type='file' device='disk'>
	I0315 23:10:53.147993   92071 main.go:141] libmachine: (ha-285481-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:10:53.148008   92071 main.go:141] libmachine: (ha-285481-m02)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/ha-285481-m02.rawdisk'/>
	I0315 23:10:53.148018   92071 main.go:141] libmachine: (ha-285481-m02)       <target dev='hda' bus='virtio'/>
	I0315 23:10:53.148034   92071 main.go:141] libmachine: (ha-285481-m02)     </disk>
	I0315 23:10:53.148052   92071 main.go:141] libmachine: (ha-285481-m02)     <interface type='network'>
	I0315 23:10:53.148063   92071 main.go:141] libmachine: (ha-285481-m02)       <source network='mk-ha-285481'/>
	I0315 23:10:53.148075   92071 main.go:141] libmachine: (ha-285481-m02)       <model type='virtio'/>
	I0315 23:10:53.148087   92071 main.go:141] libmachine: (ha-285481-m02)     </interface>
	I0315 23:10:53.148099   92071 main.go:141] libmachine: (ha-285481-m02)     <interface type='network'>
	I0315 23:10:53.148113   92071 main.go:141] libmachine: (ha-285481-m02)       <source network='default'/>
	I0315 23:10:53.148129   92071 main.go:141] libmachine: (ha-285481-m02)       <model type='virtio'/>
	I0315 23:10:53.148142   92071 main.go:141] libmachine: (ha-285481-m02)     </interface>
	I0315 23:10:53.148151   92071 main.go:141] libmachine: (ha-285481-m02)     <serial type='pty'>
	I0315 23:10:53.148163   92071 main.go:141] libmachine: (ha-285481-m02)       <target port='0'/>
	I0315 23:10:53.148180   92071 main.go:141] libmachine: (ha-285481-m02)     </serial>
	I0315 23:10:53.148193   92071 main.go:141] libmachine: (ha-285481-m02)     <console type='pty'>
	I0315 23:10:53.148209   92071 main.go:141] libmachine: (ha-285481-m02)       <target type='serial' port='0'/>
	I0315 23:10:53.148222   92071 main.go:141] libmachine: (ha-285481-m02)     </console>
	I0315 23:10:53.148232   92071 main.go:141] libmachine: (ha-285481-m02)     <rng model='virtio'>
	I0315 23:10:53.148245   92071 main.go:141] libmachine: (ha-285481-m02)       <backend model='random'>/dev/random</backend>
	I0315 23:10:53.148256   92071 main.go:141] libmachine: (ha-285481-m02)     </rng>
	I0315 23:10:53.148266   92071 main.go:141] libmachine: (ha-285481-m02)     
	I0315 23:10:53.148280   92071 main.go:141] libmachine: (ha-285481-m02)     
	I0315 23:10:53.148293   92071 main.go:141] libmachine: (ha-285481-m02)   </devices>
	I0315 23:10:53.148304   92071 main.go:141] libmachine: (ha-285481-m02) </domain>
	I0315 23:10:53.148317   92071 main.go:141] libmachine: (ha-285481-m02) 
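The XML above is the domain definition libmachine hands to libvirt for ha-285481-m02 (two vCPUs, 2200 MiB of memory, the boot2docker ISO as a cdrom, the rawdisk as the system disk, and one NIC on each of the default and mk-ha-285481 networks). As a minimal, hypothetical sketch, the same define-and-boot step could be done by shelling out to virsh; the XML path and domain name below are placeholders, and the kvm2 driver itself talks to libvirt through its API rather than the CLI.

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStart registers a domain from an XML file and boots it, the
// moral equivalent of the "define libvirt domain using xml" and
// "Creating domain..." steps logged above.
func defineAndStart(xmlPath, domain string) error {
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh define failed: %v: %s", err, out)
	}
	if out, err := exec.Command("virsh", "start", domain).CombinedOutput(); err != nil {
		return fmt.Errorf("virsh start failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStart("/tmp/ha-285481-m02.xml", "ha-285481-m02"); err != nil {
		fmt.Println(err)
	}
}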
	I0315 23:10:53.156035   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3b:93:b0 in network default
	I0315 23:10:53.156657   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:53.156676   92071 main.go:141] libmachine: (ha-285481-m02) Ensuring networks are active...
	I0315 23:10:53.157473   92071 main.go:141] libmachine: (ha-285481-m02) Ensuring network default is active
	I0315 23:10:53.157847   92071 main.go:141] libmachine: (ha-285481-m02) Ensuring network mk-ha-285481 is active
	I0315 23:10:53.158188   92071 main.go:141] libmachine: (ha-285481-m02) Getting domain xml...
	I0315 23:10:53.158864   92071 main.go:141] libmachine: (ha-285481-m02) Creating domain...
	I0315 23:10:54.395963   92071 main.go:141] libmachine: (ha-285481-m02) Waiting to get IP...
	I0315 23:10:54.396746   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:54.397079   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:54.397110   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:54.397050   92430 retry.go:31] will retry after 252.967197ms: waiting for machine to come up
	I0315 23:10:54.651653   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:54.652024   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:54.652088   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:54.652003   92430 retry.go:31] will retry after 344.44741ms: waiting for machine to come up
	I0315 23:10:54.998750   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:54.999219   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:54.999253   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:54.999165   92430 retry.go:31] will retry after 389.245503ms: waiting for machine to come up
	I0315 23:10:55.389615   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:55.390116   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:55.390154   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:55.390063   92430 retry.go:31] will retry after 474.725516ms: waiting for machine to come up
	I0315 23:10:55.866614   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:55.867053   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:55.867089   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:55.867015   92430 retry.go:31] will retry after 576.819343ms: waiting for machine to come up
	I0315 23:10:56.445568   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:56.445991   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:56.446020   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:56.445928   92430 retry.go:31] will retry after 718.21589ms: waiting for machine to come up
	I0315 23:10:57.165796   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:57.166182   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:57.166212   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:57.166131   92430 retry.go:31] will retry after 1.005197331s: waiting for machine to come up
	I0315 23:10:58.173365   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:58.173972   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:58.174003   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:58.173918   92430 retry.go:31] will retry after 1.327098151s: waiting for machine to come up
	I0315 23:10:59.503386   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:59.503852   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:59.503876   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:59.503797   92430 retry.go:31] will retry after 1.270117038s: waiting for machine to come up
	I0315 23:11:00.776260   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:00.776734   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:00.776763   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:00.776676   92430 retry.go:31] will retry after 2.054242619s: waiting for machine to come up
	I0315 23:11:02.832772   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:02.833308   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:02.833337   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:02.833260   92430 retry.go:31] will retry after 2.37826086s: waiting for machine to come up
	I0315 23:11:05.214828   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:05.215339   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:05.215376   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:05.215266   92430 retry.go:31] will retry after 3.507325443s: waiting for machine to come up
	I0315 23:11:08.723867   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:08.724264   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:08.724292   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:08.724207   92430 retry.go:31] will retry after 2.857890161s: waiting for machine to come up
	I0315 23:11:11.585086   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:11.585402   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:11.585433   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:11.585372   92430 retry.go:31] will retry after 4.808833362s: waiting for machine to come up
	I0315 23:11:16.398917   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.399364   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has current primary IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.399391   92071 main.go:141] libmachine: (ha-285481-m02) Found IP for machine: 192.168.39.201
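The run of "will retry after ..." lines above is libmachine polling the network's DHCP leases for the domain's MAC address with a growing delay until an address appears. A minimal sketch of that wait pattern, assuming a caller-supplied lookup function (the real logic lives in minikube's retry.go and the kvm2 driver, not in this hypothetical helper):

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP keeps calling lookup with an increasing delay until it returns
// an address or the overall deadline passes, mirroring the retry lines in
// the log. lookup stands in for reading the DHCP lease for the VM's MAC.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		fmt.Printf("no IP yet, will retry after %v\n", delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay += delay / 2 // grow the backoff, roughly like the log
		}
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Fake lookup that succeeds on the fourth attempt, to show the shape.
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.201", nil
	}, time.Minute)
	fmt.Println(ip, err)
}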
	I0315 23:11:16.399401   92071 main.go:141] libmachine: (ha-285481-m02) Reserving static IP address...
	I0315 23:11:16.399851   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find host DHCP lease matching {name: "ha-285481-m02", mac: "52:54:00:3a:fc:bf", ip: "192.168.39.201"} in network mk-ha-285481
	I0315 23:11:16.473894   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Getting to WaitForSSH function...
	I0315 23:11:16.473922   92071 main.go:141] libmachine: (ha-285481-m02) Reserved static IP address: 192.168.39.201
	I0315 23:11:16.473935   92071 main.go:141] libmachine: (ha-285481-m02) Waiting for SSH to be available...
	I0315 23:11:16.476347   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.476799   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.476823   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.476951   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Using SSH client type: external
	I0315 23:11:16.476980   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa (-rw-------)
	I0315 23:11:16.477097   92071 main.go:141] libmachine: (ha-285481-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 23:11:16.477125   92071 main.go:141] libmachine: (ha-285481-m02) DBG | About to run SSH command:
	I0315 23:11:16.477145   92071 main.go:141] libmachine: (ha-285481-m02) DBG | exit 0
	I0315 23:11:16.603541   92071 main.go:141] libmachine: (ha-285481-m02) DBG | SSH cmd err, output: <nil>: 
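Above, SSH readiness is decided by running a bare "exit 0" through the external ssh client with host-key checking disabled and key-only auth. A hypothetical stand-alone version of that probe (the address and key path are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs "exit 0" on the guest with the same flavour of options the
// log shows: throwaway known_hosts, no password auth, short connect timeout.
func probeSSH(ip, keyPath string) error {
	cmd := exec.Command("ssh",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@"+ip, "exit 0")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("ssh not ready yet: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(probeSSH("192.168.39.201", "/path/to/machines/ha-285481-m02/id_rsa"))
}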
	I0315 23:11:16.603856   92071 main.go:141] libmachine: (ha-285481-m02) KVM machine creation complete!
	I0315 23:11:16.604122   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetConfigRaw
	I0315 23:11:16.604730   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:16.604917   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:16.605106   92071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 23:11:16.605123   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:11:16.606380   92071 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 23:11:16.606395   92071 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 23:11:16.606403   92071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 23:11:16.606411   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.608618   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.608975   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.609014   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.609134   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.609319   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.609481   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.609667   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.609835   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.610134   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.610154   92071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 23:11:16.714937   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:11:16.714971   92071 main.go:141] libmachine: Detecting the provisioner...
	I0315 23:11:16.714981   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.717751   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.718134   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.718154   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.718369   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.718590   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.718800   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.718941   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.719155   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.719422   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.719441   92071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 23:11:16.828443   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 23:11:16.828552   92071 main.go:141] libmachine: found compatible host: buildroot
	I0315 23:11:16.828569   92071 main.go:141] libmachine: Provisioning with buildroot...
	I0315 23:11:16.828581   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:11:16.828879   92071 buildroot.go:166] provisioning hostname "ha-285481-m02"
	I0315 23:11:16.828913   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:11:16.829091   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.832030   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.832496   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.832530   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.832666   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.832881   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.833079   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.833302   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.833478   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.833689   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.833707   92071 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481-m02 && echo "ha-285481-m02" | sudo tee /etc/hostname
	I0315 23:11:16.955677   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481-m02
	
	I0315 23:11:16.955702   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.958465   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.958831   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.958860   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.958998   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.959187   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.959308   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.959444   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.959565   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.959779   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.959801   92071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:11:17.080778   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:11:17.080820   92071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:11:17.080843   92071 buildroot.go:174] setting up certificates
	I0315 23:11:17.080855   92071 provision.go:84] configureAuth start
	I0315 23:11:17.080864   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:11:17.081196   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:17.083944   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.084264   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.084291   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.084433   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.086582   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.086977   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.087000   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.087145   92071 provision.go:143] copyHostCerts
	I0315 23:11:17.087175   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:11:17.087222   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:11:17.087232   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:11:17.087298   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:11:17.087394   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:11:17.087415   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:11:17.087420   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:11:17.087451   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:11:17.087503   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:11:17.087522   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:11:17.087525   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:11:17.087544   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:11:17.087593   92071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481-m02 san=[127.0.0.1 192.168.39.201 ha-285481-m02 localhost minikube]
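The server certificate above is minted locally, signed by the shared minikube CA, and carries the node's IP, hostname and loopback address as SANs. A rough standard-library sketch of the same idea; the paths, the PKCS#1 assumption for the CA key, and the validity period are illustrative rather than minikube's actual crypto helpers, and error handling is trimmed for brevity.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Load the CA pair (placeholder paths; assumes an RSA/PKCS#1 key).
	caPEM, _ := os.ReadFile("ca.pem")
	caKeyPEM, _ := os.ReadFile("ca-key.pem")
	caBlock, _ := pem.Decode(caPEM)
	caCert, _ := x509.ParseCertificate(caBlock.Bytes)
	keyBlock, _ := pem.Decode(caKeyPEM)
	caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)

	// Fresh key pair for the node's server certificate.
	key, _ := rsa.GenerateKey(rand.Reader, 2048)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "ha-285481-m02", Organization: []string{"jenkins.ha-285481-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.201")},
		DNSNames:    []string{"ha-285481-m02", "localhost", "minikube"},
	}

	der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
}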
	I0315 23:11:17.280830   92071 provision.go:177] copyRemoteCerts
	I0315 23:11:17.280889   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:11:17.280913   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.283506   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.283820   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.283841   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.284079   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.284304   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.284457   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.284593   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:17.370618   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:11:17.370737   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:11:17.395790   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:11:17.395871   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 23:11:17.421313   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:11:17.421397   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:11:17.446164   92071 provision.go:87] duration metric: took 365.293267ms to configureAuth
	I0315 23:11:17.446197   92071 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:11:17.446430   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:11:17.446532   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.449285   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.449590   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.449615   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.449830   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.450008   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.450220   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.450390   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.450557   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:17.450785   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:17.450806   92071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:11:17.749612   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:11:17.749640   92071 main.go:141] libmachine: Checking connection to Docker...
	I0315 23:11:17.749648   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetURL
	I0315 23:11:17.751024   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Using libvirt version 6000000
	I0315 23:11:17.753064   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.753432   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.753459   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.753625   92071 main.go:141] libmachine: Docker is up and running!
	I0315 23:11:17.753635   92071 main.go:141] libmachine: Reticulating splines...
	I0315 23:11:17.753643   92071 client.go:171] duration metric: took 25.050302241s to LocalClient.Create
	I0315 23:11:17.753673   92071 start.go:167] duration metric: took 25.050395782s to libmachine.API.Create "ha-285481"
	I0315 23:11:17.753684   92071 start.go:293] postStartSetup for "ha-285481-m02" (driver="kvm2")
	I0315 23:11:17.753695   92071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:11:17.753712   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:17.753944   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:11:17.753972   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.756226   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.756613   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.756642   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.756786   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.756981   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.757162   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.757304   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:17.842063   92071 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:11:17.846629   92071 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:11:17.846661   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:11:17.846728   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:11:17.846829   92071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:11:17.846845   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:11:17.846956   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:11:17.856680   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:11:17.881785   92071 start.go:296] duration metric: took 128.084575ms for postStartSetup
	I0315 23:11:17.881854   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetConfigRaw
	I0315 23:11:17.882547   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:17.885243   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.885665   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.885692   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.885952   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:11:17.886166   92071 start.go:128] duration metric: took 25.201095975s to createHost
	I0315 23:11:17.886194   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.888268   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.888556   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.888602   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.888677   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.888866   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.889031   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.889154   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.889324   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:17.889533   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:17.889547   92071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:11:17.996267   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544277.968770109
	
	I0315 23:11:17.996296   92071 fix.go:216] guest clock: 1710544277.968770109
	I0315 23:11:17.996306   92071 fix.go:229] Guest: 2024-03-15 23:11:17.968770109 +0000 UTC Remote: 2024-03-15 23:11:17.886181477 +0000 UTC m=+82.104509591 (delta=82.588632ms)
	I0315 23:11:17.996327   92071 fix.go:200] guest clock delta is within tolerance: 82.588632ms
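The guest-clock lines above read "date +%s.%N" on the new VM, compare it against the host's wall clock, and accept a delta of roughly 82ms as within tolerance. A tiny sketch of that comparison; the tolerance value below is an assumption, not the one hard-coded in minikube's fix.go.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// is from the supplied host timestamp. Float parsing loses a little
// sub-microsecond precision, which is fine for a sanity check like this.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	// Values taken from the log lines above.
	host := time.Date(2024, 3, 15, 23, 11, 17, 886181477, time.UTC)
	delta, _ := clockDelta("1710544277.968770109", host)
	if delta < 0 {
		delta = -delta
	}
	tolerance := time.Second // assumed tolerance
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance)
}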
	I0315 23:11:17.996333   92071 start.go:83] releasing machines lock for "ha-285481-m02", held for 25.311355257s
	I0315 23:11:17.996358   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:17.996698   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:17.999358   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.999729   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.999766   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.002384   92071 out.go:177] * Found network options:
	I0315 23:11:18.004023   92071 out.go:177]   - NO_PROXY=192.168.39.23
	W0315 23:11:18.005372   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:11:18.005420   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:18.005999   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:18.006203   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:18.006325   92071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:11:18.006365   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	W0315 23:11:18.006366   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:11:18.006435   92071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:11:18.006457   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:18.009221   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009430   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009607   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:18.009634   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009779   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:18.009800   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009805   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:18.009980   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:18.009980   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:18.010182   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:18.010198   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:18.010343   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:18.010377   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:18.010497   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:18.257359   92071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:11:18.264374   92071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:11:18.264477   92071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:11:18.281573   92071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 23:11:18.281608   92071 start.go:494] detecting cgroup driver to use...
	I0315 23:11:18.281676   92071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:11:18.303233   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:11:18.319295   92071 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:11:18.319372   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:11:18.335486   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:11:18.351237   92071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:11:18.467012   92071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:11:18.637365   92071 docker.go:233] disabling docker service ...
	I0315 23:11:18.637443   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:11:18.653001   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:11:18.667273   92071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:11:18.792614   92071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:11:18.913797   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:11:18.928846   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:11:18.948395   92071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:11:18.948474   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.960287   92071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:11:18.960376   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.972092   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.983557   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.995153   92071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:11:19.008066   92071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:11:19.018665   92071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 23:11:19.018736   92071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 23:11:19.033254   92071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:11:19.044086   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:11:19.166681   92071 ssh_runner.go:195] Run: sudo systemctl restart crio
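The sed invocations above patch /etc/crio/crio.conf.d/02-crio.conf in place before crio is restarted: pin the pause image to registry.k8s.io/pause:3.9, drop any existing conmon_cgroup line, and set cgroup_manager to cgroupfs with conmon_cgroup = "pod" appended after it. A hypothetical Go version of the same edit (the regexes and path mirror the logged commands; writing the file needs root on the guest):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// patchCrioConf applies the same edits as the sed commands in the log.
func patchCrioConf(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	conf := string(data)
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).
		ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")
	return os.WriteFile(path, []byte(conf), 0o644)
}

func main() {
	if err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
		fmt.Println(err)
	}
}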
	I0315 23:11:19.329472   92071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:11:19.329539   92071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:11:19.334805   92071 start.go:562] Will wait 60s for crictl version
	I0315 23:11:19.334851   92071 ssh_runner.go:195] Run: which crictl
	I0315 23:11:19.338846   92071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:11:19.380782   92071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:11:19.380874   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:11:19.409303   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:11:19.439910   92071 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:11:19.441369   92071 out.go:177]   - env NO_PROXY=192.168.39.23
	I0315 23:11:19.442697   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:19.445455   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:19.445796   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:19.445823   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:19.446082   92071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:11:19.450598   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:11:19.463286   92071 mustload.go:65] Loading cluster: ha-285481
	I0315 23:11:19.463491   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:11:19.463787   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:11:19.463836   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:11:19.478653   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38171
	I0315 23:11:19.479071   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:11:19.479544   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:11:19.479564   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:11:19.479842   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:11:19.480019   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:11:19.481450   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:11:19.481740   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:11:19.481781   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:11:19.495824   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0315 23:11:19.496342   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:11:19.496842   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:11:19.496872   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:11:19.497152   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:11:19.497332   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:11:19.497475   92071 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.201
	I0315 23:11:19.497485   92071 certs.go:194] generating shared ca certs ...
	I0315 23:11:19.497499   92071 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:11:19.497633   92071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:11:19.497677   92071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:11:19.497687   92071 certs.go:256] generating profile certs ...
	I0315 23:11:19.497794   92071 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:11:19.497820   92071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027
	I0315 23:11:19.497836   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.201 192.168.39.254]
	I0315 23:11:19.620686   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027 ...
	I0315 23:11:19.620718   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027: {Name:mk85afc9afc0cec0ea2b0d31c760805aa2a86c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:11:19.620908   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027 ...
	I0315 23:11:19.620926   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027: {Name:mke92e5c9595faada63f5a098b96c1719f9a5cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:11:19.621026   92071 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:11:19.621166   92071 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
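The serving certificate generated above (apiserver.crt.c32bf027, then copied into place as apiserver.crt) lists the Kubernetes service ClusterIP, localhost, both control-plane node IPs and the kube-vip VIP 192.168.39.254 as IP SANs, so the API server presents a valid certificate no matter which of those addresses a client dials. A self-contained sketch of minting such a certificate with Go's crypto/x509 follows; the in-memory CA, key sizes and lifetimes are illustrative rather than minikube's exact parameters.

// apiserver_cert.go: mint an API-server serving cert with IP SANs from a CA.
// Error handling is elided for brevity; values mirror the SAN list above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// A throwaway CA (minikube reuses its cached minikubeCA key pair instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert carrying the service IP, localhost, the control-plane
	// node IPs and the kube-vip VIP as IP SANs.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.23"), net.ParseIP("192.168.39.201"),
			net.ParseIP("192.168.39.254"),
		},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}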
	I0315 23:11:19.621294   92071 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:11:19.621311   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:11:19.621324   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:11:19.621337   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:11:19.621348   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:11:19.621358   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:11:19.621368   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:11:19.621378   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:11:19.621388   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:11:19.621434   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:11:19.621462   92071 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:11:19.621472   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:11:19.621492   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:11:19.621512   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:11:19.621534   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:11:19.621572   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:11:19.621596   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:11:19.621610   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:19.621621   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:11:19.621654   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:11:19.624944   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:19.625520   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:11:19.625550   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:19.625780   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:11:19.625982   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:11:19.626150   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:11:19.626305   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:11:19.703773   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0315 23:11:19.708688   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 23:11:19.720860   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0315 23:11:19.724985   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 23:11:19.736231   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 23:11:19.740487   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 23:11:19.751401   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0315 23:11:19.755603   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0315 23:11:19.766660   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0315 23:11:19.770831   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 23:11:19.781384   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0315 23:11:19.785422   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0315 23:11:19.796815   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:11:19.825782   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:11:19.853084   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:11:19.882045   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:11:19.911348   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0315 23:11:19.938702   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:11:19.967987   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:11:19.993997   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:11:20.021482   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:11:20.048102   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:11:20.073693   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:11:20.099403   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 23:11:20.118548   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 23:11:20.136388   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 23:11:20.154057   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0315 23:11:20.171430   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 23:11:20.189279   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0315 23:11:20.207516   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 23:11:20.225059   92071 ssh_runner.go:195] Run: openssl version
	I0315 23:11:20.231114   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:11:20.242419   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:20.247132   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:20.247183   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:20.252977   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:11:20.263906   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:11:20.275213   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:11:20.280147   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:11:20.280236   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:11:20.286093   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:11:20.297326   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:11:20.308373   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:11:20.313081   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:11:20.313161   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:11:20.319170   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
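The openssl/ln steps above make each CA trusted system-wide: the bundle is placed under /usr/share/ca-certificates and a symlink named after its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0) is created in /etc/ssl/certs, which is how OpenSSL locates trust anchors. A small sketch of that linking, shelling out to openssl for the hash; the certificate path is illustrative.

// ca_link.go: create the /etc/ssl/certs/<subject-hash>.0 symlink for a CA
// certificate, mirroring the openssl/ln steps in the log (illustrative).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path

	// `openssl x509 -hash -noout -in <cert>` prints the subject hash that
	// OpenSSL expects as the symlink name (e.g. b5213941).
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println("hashing failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		fmt.Println("symlink failed:", err)
		return
	}
	fmt.Println("linked", link, "->", certPath)
}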
	I0315 23:11:20.330637   92071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:11:20.335223   92071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 23:11:20.335273   92071 kubeadm.go:928] updating node {m02 192.168.39.201 8443 v1.28.4 crio true true} ...
	I0315 23:11:20.335396   92071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
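The kubelet line above carries the per-node settings for the joining machine: --hostname-override and --node-ip pin its identity to ha-285481-m02 / 192.168.39.201, and --bootstrap-kubeconfig points at the kubeadm bootstrap credentials used for TLS bootstrapping before /etc/kubernetes/kubelet.conf exists. A hedged sketch of rendering such a drop-in with text/template follows; the template text and field names are illustrative, not minikube's actual template.

// kubelet_dropin.go: render a per-node kubelet ExecStart drop-in in the
// spirit of the unit shown above (template text is illustrative).
package main

import (
	"os"
	"text/template"
)

const dropin = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf \
  --config=/var/lib/kubelet/config.yaml \
  --hostname-override={{.NodeName}} \
  --kubeconfig=/etc/kubernetes/kubelet.conf \
  --node-ip={{.NodeIP}}
`

func main() {
	params := struct {
		KubernetesVersion, NodeName, NodeIP string
	}{"v1.28.4", "ha-285481-m02", "192.168.39.201"}

	tmpl := template.Must(template.New("dropin").Parse(dropin))
	// In the log this content ends up at
	// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf on the node.
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}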
	I0315 23:11:20.335430   92071 kube-vip.go:111] generating kube-vip config ...
	I0315 23:11:20.335468   92071 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:11:20.353067   92071 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:11:20.353156   92071 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
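This manifest runs kube-vip as a static pod on each control-plane node: with cp_enable and vip_leaderelection set, the instances contend for the plndr-cp-lock lease and the current leader announces 192.168.39.254 over ARP on eth0, while lb_enable spreads API traffic arriving on port 8443 across the control-plane members. A tiny reachability probe one could run once the pod is up is sketched below; it dials the VIP with certificate verification disabled purely to confirm something is answering, and is illustrative only.

// vip_probe.go: confirm the kube-vip address answers on the API server port.
// Verification is skipped on purpose; this only checks reachability.
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.254:8443" // VIP and port taken from the manifest above
	conn, err := tls.DialWithDialer(&net.Dialer{Timeout: 5 * time.Second},
		"tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	state := conn.ConnectionState()
	if len(state.PeerCertificates) > 0 {
		fmt.Println("VIP answered, serving cert CN:",
			state.PeerCertificates[0].Subject.CommonName)
	}
}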
	I0315 23:11:20.353217   92071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:11:20.364295   92071 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 23:11:20.364366   92071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 23:11:20.375447   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 23:11:20.375458   92071 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0315 23:11:20.375479   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:11:20.375491   92071 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0315 23:11:20.375554   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:11:20.380055   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 23:11:20.380092   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 23:11:21.023047   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:11:21.023129   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:11:21.030025   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 23:11:21.030055   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 23:11:21.506672   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:11:21.521518   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:11:21.521633   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:11:21.526227   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 23:11:21.526271   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
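Because /var/lib/minikube/binaries/v1.28.4 does not yet exist on the new node, kubectl, kubeadm and kubelet are downloaded from dl.k8s.io (each URL paired with its .sha256 checksum, as the checksum= parameters above show), cached locally, and copied over SSH. A minimal sketch of one such checksum-verified download follows; the local destination is illustrative.

// fetch_kubectl.go: download a release binary and verify it against the
// published .sha256 file, as the checksum= URLs in the log indicate.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"

	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}

	want := strings.Fields(string(sumFile))[0]
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	// Write wherever convenient; minikube scp's it to
	// /var/lib/minikube/binaries/v1.28.4/kubectl on the node.
	if err := os.WriteFile("kubectl", bin, 0o755); err != nil {
		panic(err)
	}
	fmt.Println("kubectl downloaded and verified,", len(bin), "bytes")
}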
	I0315 23:11:22.004288   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 23:11:22.014822   92071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0315 23:11:22.032874   92071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:11:22.050860   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:11:22.068725   92071 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:11:22.073004   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:11:22.087536   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:11:22.215082   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:11:22.233280   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:11:22.233700   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:11:22.233747   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:11:22.248515   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0315 23:11:22.249034   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:11:22.249547   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:11:22.249569   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:11:22.249914   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:11:22.250127   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:11:22.250277   92071 start.go:316] joinCluster: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:11:22.250391   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 23:11:22.250417   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:11:22.253285   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:22.253726   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:11:22.253753   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:22.253883   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:11:22.254070   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:11:22.254241   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:11:22.254385   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:11:22.436047   92071 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:11:22.436101   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dp4yf.bvb9yd6ppxvzjirg --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m02 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443"
	I0315 23:12:03.809527   92071 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dp4yf.bvb9yd6ppxvzjirg --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m02 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443": (41.37339146s)
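The join above authenticates in both directions: the bootstrap token (2dp4yf.…) proves the joining node to the cluster, while --discovery-token-ca-cert-hash pins the cluster CA so the node cannot be pointed at a rogue API server. That hash is the SHA-256 of the CA certificate's DER-encoded SubjectPublicKeyInfo, which can be recomputed from ca.crt as sketched below; the file path is illustrative.

// ca_hash.go: recompute kubeadm's --discovery-token-ca-cert-hash value
// (sha256 over the CA cert's SubjectPublicKeyInfo) from a ca.crt file.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // illustrative path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}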
	I0315 23:12:03.809567   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 23:12:04.174017   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-285481-m02 minikube.k8s.io/updated_at=2024_03_15T23_12_04_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=ha-285481 minikube.k8s.io/primary=false
	I0315 23:12:04.310914   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-285481-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 23:12:04.449321   92071 start.go:318] duration metric: took 42.19903763s to joinCluster
	I0315 23:12:04.449408   92071 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:12:04.450702   92071 out.go:177] * Verifying Kubernetes components...
	I0315 23:12:04.449668   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:04.451848   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:12:04.649867   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:12:04.665221   92071 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:12:04.665503   92071 kapi.go:59] client config for ha-285481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt", KeyFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key", CAFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 23:12:04.665568   92071 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.23:8443
	I0315 23:12:04.665827   92071 node_ready.go:35] waiting up to 6m0s for node "ha-285481-m02" to be "Ready" ...
	I0315 23:12:04.665942   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:04.665953   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:04.665964   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:04.665970   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:04.675576   92071 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0315 23:12:05.166643   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:05.166666   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:05.166674   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:05.166678   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:05.169879   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:05.666721   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:05.666747   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:05.666759   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:05.666764   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:05.670743   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:06.166562   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:06.166596   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:06.166607   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:06.166612   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:06.170108   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:06.666130   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:06.666157   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:06.666170   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:06.666175   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:06.675669   92071 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0315 23:12:06.676389   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:07.166719   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:07.166739   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:07.166747   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:07.166751   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:07.170970   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:07.666691   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:07.666715   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:07.666722   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:07.666727   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:07.671505   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:08.166907   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:08.166935   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:08.166947   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:08.166952   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:08.171388   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:08.666107   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:08.666131   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:08.666141   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:08.666144   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:08.670576   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:09.166735   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:09.166762   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:09.166770   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:09.166775   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:09.170847   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:09.171386   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:09.666850   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:09.666874   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:09.666882   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:09.666886   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:09.670926   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:10.166964   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:10.166986   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:10.166994   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:10.166998   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:10.170606   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:10.666425   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:10.666449   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:10.666457   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:10.666460   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:10.671198   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:11.167034   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:11.167058   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:11.167065   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:11.167069   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:11.170974   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:11.171640   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:11.666253   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:11.666281   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:11.666293   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:11.666302   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:11.670535   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:12.166855   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:12.166883   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:12.166895   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:12.166900   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:12.170570   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:12.666128   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:12.666150   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:12.666158   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:12.666165   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:12.670101   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:13.166829   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:13.166854   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:13.166868   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:13.166873   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:13.172325   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:12:13.173011   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:13.666393   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:13.666426   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:13.666436   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:13.666443   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:13.670340   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.166651   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.166695   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.166706   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.166710   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.172556   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:12:14.174054   92071 node_ready.go:49] node "ha-285481-m02" has status "Ready":"True"
	I0315 23:12:14.174073   92071 node_ready.go:38] duration metric: took 9.508228506s for node "ha-285481-m02" to be "Ready" ...
	I0315 23:12:14.174083   92071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
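node_ready and pod_ready follow the same pattern: keep issuing GETs until the Ready condition on the node, and then on each system-critical pod, reports True, or the stated timeout expires. A compact client-go sketch of the node half is shown below, assuming a kubeconfig in the default location; the polling interval and timeout are illustrative.

// node_ready.go: poll a node until its Ready condition is True, mirroring
// the GET /api/v1/nodes/<name> loop in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	const name = "ha-285481-m02"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		node, err := client.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println(name, "is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // the log polls roughly twice a second
	}
	fmt.Println("timed out waiting for", name)
}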
	I0315 23:12:14.174169   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:14.174178   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.174185   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.174189   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.180352   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:12:14.186761   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.186876   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9c44k
	I0315 23:12:14.186887   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.186894   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.186900   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.190517   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.191402   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.191417   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.191425   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.191430   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.194391   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.195024   92071 pod_ready.go:92] pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.195047   92071 pod_ready.go:81] duration metric: took 8.253041ms for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.195059   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.195130   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qxtp4
	I0315 23:12:14.195139   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.195145   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.195149   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.197852   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.198531   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.198546   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.198557   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.198561   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.201010   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.201670   92071 pod_ready.go:92] pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.201689   92071 pod_ready.go:81] duration metric: took 6.618034ms for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.201697   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.201747   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481
	I0315 23:12:14.201754   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.201761   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.201769   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.204434   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.205136   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.205153   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.205161   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.205166   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.211147   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:12:14.211715   92071 pod_ready.go:92] pod "etcd-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.211740   92071 pod_ready.go:81] duration metric: took 10.032825ms for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.211753   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.211821   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m02
	I0315 23:12:14.211832   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.211841   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.211846   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.215218   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.215824   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.215842   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.215854   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.215863   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.219968   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:14.220791   92071 pod_ready.go:92] pod "etcd-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.220808   92071 pod_ready.go:81] duration metric: took 9.041234ms for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.220822   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.367255   92071 request.go:629] Waited for 146.342872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481
	I0315 23:12:14.367373   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481
	I0315 23:12:14.367384   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.367393   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.367400   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.371391   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.566974   92071 request.go:629] Waited for 194.834112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.567030   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.567035   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.567043   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.567048   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.570570   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.571196   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.571218   92071 pod_ready.go:81] duration metric: took 350.387909ms for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
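The "Waited for … due to client-side throttling" messages come from client-go itself: the rest.Config dumped earlier leaves QPS and Burst at 0, so the defaults of 5 requests per second with a burst of 10 apply, and once the burst is spent each of these back-to-back GETs queues for roughly 150-200ms. The behaviour is a plain token bucket, as the small golang.org/x/time/rate sketch below illustrates.

// throttle_demo.go: a 5 QPS / burst 10 token bucket, the same shape as
// client-go's default request rate limiter that produces the waits above.
package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(5), 10) // 5 req/s, burst of 10

	start := time.Now()
	for i := 1; i <= 15; i++ {
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		// After the burst is spent, each "request" waits ~200ms for a token,
		// which is the kind of delay the log reports.
		fmt.Printf("request %2d at +%v\n", i, time.Since(start).Round(10*time.Millisecond))
	}
}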
	I0315 23:12:14.571230   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.767188   92071 request.go:629] Waited for 195.878941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m02
	I0315 23:12:14.767287   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m02
	I0315 23:12:14.767297   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.767307   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.767338   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.771442   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:14.966769   92071 request.go:629] Waited for 194.281826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.966840   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.966848   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.966859   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.966870   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.970756   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.971339   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.971363   92071 pod_ready.go:81] duration metric: took 400.122734ms for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.971390   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.167483   92071 request.go:629] Waited for 196.015214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481
	I0315 23:12:15.167544   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481
	I0315 23:12:15.167549   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.167570   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.167574   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.171380   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:15.367395   92071 request.go:629] Waited for 195.406578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:15.367484   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:15.367497   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.367509   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.367515   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.371862   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:15.372529   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:15.372554   92071 pod_ready.go:81] duration metric: took 401.156298ms for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.372568   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.567597   92071 request.go:629] Waited for 194.940851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:12:15.567675   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:12:15.567683   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.567691   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.567698   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.571236   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:15.767288   92071 request.go:629] Waited for 195.385555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:15.767367   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:15.767376   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.767385   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.767391   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.771308   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:15.771765   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:15.771788   92071 pod_ready.go:81] duration metric: took 399.209045ms for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.771798   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.966700   92071 request.go:629] Waited for 194.821913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:12:15.966778   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:12:15.966787   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.966798   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.966806   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.971003   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.167303   92071 request.go:629] Waited for 195.440131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:16.167398   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:16.167406   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.167414   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.167421   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.170798   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:16.171380   92071 pod_ready.go:92] pod "kube-proxy-2hcgt" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:16.171399   92071 pod_ready.go:81] duration metric: took 399.595276ms for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.171409   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.367474   92071 request.go:629] Waited for 195.988442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:12:16.367558   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:12:16.367564   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.367572   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.367578   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.372027   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.567019   92071 request.go:629] Waited for 194.38252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.567078   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.567083   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.567091   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.567094   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.571363   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.572462   92071 pod_ready.go:92] pod "kube-proxy-cml9m" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:16.572486   92071 pod_ready.go:81] duration metric: took 401.069342ms for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.572498   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.767682   92071 request.go:629] Waited for 195.091788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:12:16.767759   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:12:16.767773   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.767785   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.767793   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.772564   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.966997   92071 request.go:629] Waited for 193.388496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.967099   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.967131   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.967142   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.967147   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.971452   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.972250   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:16.972271   92071 pod_ready.go:81] duration metric: took 399.764452ms for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.972293   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:17.167453   92071 request.go:629] Waited for 195.048432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:12:17.167534   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:12:17.167544   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.167552   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.167558   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.171521   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:17.367634   92071 request.go:629] Waited for 195.462259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:17.367717   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:17.367722   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.367731   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.367735   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.372166   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:17.372789   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:17.372809   92071 pod_ready.go:81] duration metric: took 400.508205ms for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:17.372819   92071 pod_ready.go:38] duration metric: took 3.198702211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
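Note on the readiness loop above: each pod check is a pair of throttled GETs (the pod itself, then its node), and a pod counts as "Ready" once its PodReady condition is True. Below is a minimal client-go sketch of that pattern; it is illustrative only, not minikube's pod_ready.go, and the kubeconfig path is an assumption.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady mirrors the condition the log is waiting on: PodReady must be True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumption: kubeconfig path; the test above talks to https://192.168.39.23:8443.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute) // matches the 6m0s wait in the log
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(),
			"kube-controller-manager-ha-285481", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(400 * time.Millisecond) // checks land ~400ms apart due to client-side throttling
	}
	fmt.Println("timed out waiting for pod to become Ready")
}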
	I0315 23:12:17.372838   92071 api_server.go:52] waiting for apiserver process to appear ...
	I0315 23:12:17.372911   92071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:12:17.389626   92071 api_server.go:72] duration metric: took 12.940172719s to wait for apiserver process to appear ...
	I0315 23:12:17.389657   92071 api_server.go:88] waiting for apiserver healthz status ...
	I0315 23:12:17.389693   92071 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I0315 23:12:17.395595   92071 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I0315 23:12:17.395688   92071 round_trippers.go:463] GET https://192.168.39.23:8443/version
	I0315 23:12:17.395700   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.395711   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.395720   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.396958   92071 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0315 23:12:17.397057   92071 api_server.go:141] control plane version: v1.28.4
	I0315 23:12:17.397084   92071 api_server.go:131] duration metric: took 7.413304ms to wait for apiserver health ...
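The healthz wait above reduces to an HTTPS GET of /healthz followed by a GET of /version. A minimal sketch follows (illustrative only; the InsecureSkipVerify transport is a shortcut for the example, whereas minikube verifies against the cluster CA).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, url := range []string{
		"https://192.168.39.23:8443/healthz",
		"https://192.168.39.23:8443/version",
	} {
		resp, err := client.Get(url)
		if err != nil {
			panic(err)
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
	}
}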
	I0315 23:12:17.397098   92071 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 23:12:17.567501   92071 request.go:629] Waited for 170.339953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.567587   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.567595   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.567608   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.567617   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.573734   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:12:17.578383   92071 system_pods.go:59] 17 kube-system pods found
	I0315 23:12:17.578411   92071 system_pods.go:61] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:12:17.578420   92071 system_pods.go:61] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:12:17.578424   92071 system_pods.go:61] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:12:17.578427   92071 system_pods.go:61] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:12:17.578430   92071 system_pods.go:61] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:12:17.578434   92071 system_pods.go:61] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:12:17.578437   92071 system_pods.go:61] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:12:17.578440   92071 system_pods.go:61] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:12:17.578444   92071 system_pods.go:61] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:12:17.578447   92071 system_pods.go:61] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:12:17.578450   92071 system_pods.go:61] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:12:17.578453   92071 system_pods.go:61] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:12:17.578456   92071 system_pods.go:61] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:12:17.578462   92071 system_pods.go:61] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:12:17.578465   92071 system_pods.go:61] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:12:17.578467   92071 system_pods.go:61] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:12:17.578470   92071 system_pods.go:61] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:12:17.578475   92071 system_pods.go:74] duration metric: took 181.371699ms to wait for pod list to return data ...
	I0315 23:12:17.578482   92071 default_sa.go:34] waiting for default service account to be created ...
	I0315 23:12:17.767033   92071 request.go:629] Waited for 188.478323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:12:17.767114   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:12:17.767120   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.767128   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.767134   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.771003   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:17.771244   92071 default_sa.go:45] found service account: "default"
	I0315 23:12:17.771265   92071 default_sa.go:55] duration metric: took 192.776688ms for default service account to be created ...
	I0315 23:12:17.771274   92071 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 23:12:17.967530   92071 request.go:629] Waited for 196.177413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.967630   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.967638   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.967649   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.967657   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.973878   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:12:17.978474   92071 system_pods.go:86] 17 kube-system pods found
	I0315 23:12:17.978503   92071 system_pods.go:89] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:12:17.978508   92071 system_pods.go:89] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:12:17.978512   92071 system_pods.go:89] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:12:17.978517   92071 system_pods.go:89] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:12:17.978520   92071 system_pods.go:89] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:12:17.978525   92071 system_pods.go:89] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:12:17.978528   92071 system_pods.go:89] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:12:17.978532   92071 system_pods.go:89] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:12:17.978536   92071 system_pods.go:89] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:12:17.978540   92071 system_pods.go:89] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:12:17.978543   92071 system_pods.go:89] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:12:17.978547   92071 system_pods.go:89] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:12:17.978550   92071 system_pods.go:89] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:12:17.978554   92071 system_pods.go:89] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:12:17.978557   92071 system_pods.go:89] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:12:17.978562   92071 system_pods.go:89] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:12:17.978572   92071 system_pods.go:89] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:12:17.978581   92071 system_pods.go:126] duration metric: took 207.300967ms to wait for k8s-apps to be running ...
	I0315 23:12:17.978596   92071 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 23:12:17.978668   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:12:17.999786   92071 system_svc.go:56] duration metric: took 21.178532ms WaitForService to wait for kubelet
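The kubelet check above relies on "systemctl is-active --quiet" exiting 0 when the unit is active. A minimal sketch of the same probe, run locally rather than through minikube's SSH runner:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// "systemctl is-active --quiet <unit>" exits 0 only when the unit is active;
	// the log runs the equivalent check over SSH with sudo.
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}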
	I0315 23:12:17.999824   92071 kubeadm.go:576] duration metric: took 13.550375462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:12:17.999850   92071 node_conditions.go:102] verifying NodePressure condition ...
	I0315 23:12:18.167354   92071 request.go:629] Waited for 167.370598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes
	I0315 23:12:18.167423   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes
	I0315 23:12:18.167429   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:18.167437   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:18.167465   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:18.171040   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:18.171798   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:12:18.171835   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:12:18.171847   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:12:18.171851   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:12:18.171855   92071 node_conditions.go:105] duration metric: took 172.000134ms to run NodePressure ...
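The NodePressure step reads each node's capacity from the API; the 17734596Ki ephemeral storage and 2 CPUs reported above come from node.Status.Capacity. A minimal client-go sketch of that read (kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // assumption
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
}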
	I0315 23:12:18.171866   92071 start.go:240] waiting for startup goroutines ...
	I0315 23:12:18.171895   92071 start.go:254] writing updated cluster config ...
	I0315 23:12:18.174181   92071 out.go:177] 
	I0315 23:12:18.175893   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:18.175991   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:12:18.177966   92071 out.go:177] * Starting "ha-285481-m03" control-plane node in "ha-285481" cluster
	I0315 23:12:18.179302   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:12:18.179347   92071 cache.go:56] Caching tarball of preloaded images
	I0315 23:12:18.179455   92071 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:12:18.179468   92071 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:12:18.179573   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:12:18.179753   92071 start.go:360] acquireMachinesLock for ha-285481-m03: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:12:18.179794   92071 start.go:364] duration metric: took 22.965µs to acquireMachinesLock for "ha-285481-m03"
	I0315 23:12:18.179809   92071 start.go:93] Provisioning new machine with config: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:12:18.179909   92071 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0315 23:12:18.181483   92071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:12:18.181569   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:18.181610   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:18.196579   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0315 23:12:18.197065   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:18.197487   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:18.197505   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:18.197809   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:18.198018   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:18.198162   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:18.198324   92071 start.go:159] libmachine.API.Create for "ha-285481" (driver="kvm2")
	I0315 23:12:18.198351   92071 client.go:168] LocalClient.Create starting
	I0315 23:12:18.198387   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:12:18.198425   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:12:18.198443   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:12:18.198520   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:12:18.198552   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:12:18.198569   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:12:18.198602   92071 main.go:141] libmachine: Running pre-create checks...
	I0315 23:12:18.198611   92071 main.go:141] libmachine: (ha-285481-m03) Calling .PreCreateCheck
	I0315 23:12:18.198777   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetConfigRaw
	I0315 23:12:18.199124   92071 main.go:141] libmachine: Creating machine...
	I0315 23:12:18.199139   92071 main.go:141] libmachine: (ha-285481-m03) Calling .Create
	I0315 23:12:18.199244   92071 main.go:141] libmachine: (ha-285481-m03) Creating KVM machine...
	I0315 23:12:18.200608   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found existing default KVM network
	I0315 23:12:18.200730   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found existing private KVM network mk-ha-285481
	I0315 23:12:18.200842   92071 main.go:141] libmachine: (ha-285481-m03) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03 ...
	I0315 23:12:18.200863   92071 main.go:141] libmachine: (ha-285481-m03) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:12:18.200933   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.200832   92758 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:12:18.201063   92071 main.go:141] libmachine: (ha-285481-m03) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:12:18.433977   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.433846   92758 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa...
	I0315 23:12:18.667560   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.667404   92758 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/ha-285481-m03.rawdisk...
	I0315 23:12:18.667595   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Writing magic tar header
	I0315 23:12:18.667614   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Writing SSH key tar header
	I0315 23:12:18.667658   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.667518   92758 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03 ...
	I0315 23:12:18.667702   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03 (perms=drwx------)
	I0315 23:12:18.667736   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:12:18.667750   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03
	I0315 23:12:18.667771   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:12:18.667786   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:12:18.667802   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:12:18.667815   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:12:18.667830   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:12:18.667846   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:12:18.667861   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:12:18.667877   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:12:18.667888   92071 main.go:141] libmachine: (ha-285481-m03) Creating domain...
	I0315 23:12:18.667898   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:12:18.667913   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home
	I0315 23:12:18.667925   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Skipping /home - not owner
	I0315 23:12:18.668824   92071 main.go:141] libmachine: (ha-285481-m03) define libvirt domain using xml: 
	I0315 23:12:18.668838   92071 main.go:141] libmachine: (ha-285481-m03) <domain type='kvm'>
	I0315 23:12:18.668847   92071 main.go:141] libmachine: (ha-285481-m03)   <name>ha-285481-m03</name>
	I0315 23:12:18.668855   92071 main.go:141] libmachine: (ha-285481-m03)   <memory unit='MiB'>2200</memory>
	I0315 23:12:18.668864   92071 main.go:141] libmachine: (ha-285481-m03)   <vcpu>2</vcpu>
	I0315 23:12:18.668876   92071 main.go:141] libmachine: (ha-285481-m03)   <features>
	I0315 23:12:18.668888   92071 main.go:141] libmachine: (ha-285481-m03)     <acpi/>
	I0315 23:12:18.668899   92071 main.go:141] libmachine: (ha-285481-m03)     <apic/>
	I0315 23:12:18.668908   92071 main.go:141] libmachine: (ha-285481-m03)     <pae/>
	I0315 23:12:18.668919   92071 main.go:141] libmachine: (ha-285481-m03)     
	I0315 23:12:18.668932   92071 main.go:141] libmachine: (ha-285481-m03)   </features>
	I0315 23:12:18.668948   92071 main.go:141] libmachine: (ha-285481-m03)   <cpu mode='host-passthrough'>
	I0315 23:12:18.668960   92071 main.go:141] libmachine: (ha-285481-m03)   
	I0315 23:12:18.668976   92071 main.go:141] libmachine: (ha-285481-m03)   </cpu>
	I0315 23:12:18.668987   92071 main.go:141] libmachine: (ha-285481-m03)   <os>
	I0315 23:12:18.668993   92071 main.go:141] libmachine: (ha-285481-m03)     <type>hvm</type>
	I0315 23:12:18.669000   92071 main.go:141] libmachine: (ha-285481-m03)     <boot dev='cdrom'/>
	I0315 23:12:18.669007   92071 main.go:141] libmachine: (ha-285481-m03)     <boot dev='hd'/>
	I0315 23:12:18.669019   92071 main.go:141] libmachine: (ha-285481-m03)     <bootmenu enable='no'/>
	I0315 23:12:18.669050   92071 main.go:141] libmachine: (ha-285481-m03)   </os>
	I0315 23:12:18.669061   92071 main.go:141] libmachine: (ha-285481-m03)   <devices>
	I0315 23:12:18.669077   92071 main.go:141] libmachine: (ha-285481-m03)     <disk type='file' device='cdrom'>
	I0315 23:12:18.669096   92071 main.go:141] libmachine: (ha-285481-m03)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/boot2docker.iso'/>
	I0315 23:12:18.669134   92071 main.go:141] libmachine: (ha-285481-m03)       <target dev='hdc' bus='scsi'/>
	I0315 23:12:18.669158   92071 main.go:141] libmachine: (ha-285481-m03)       <readonly/>
	I0315 23:12:18.669172   92071 main.go:141] libmachine: (ha-285481-m03)     </disk>
	I0315 23:12:18.669179   92071 main.go:141] libmachine: (ha-285481-m03)     <disk type='file' device='disk'>
	I0315 23:12:18.669194   92071 main.go:141] libmachine: (ha-285481-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:12:18.669211   92071 main.go:141] libmachine: (ha-285481-m03)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/ha-285481-m03.rawdisk'/>
	I0315 23:12:18.669227   92071 main.go:141] libmachine: (ha-285481-m03)       <target dev='hda' bus='virtio'/>
	I0315 23:12:18.669237   92071 main.go:141] libmachine: (ha-285481-m03)     </disk>
	I0315 23:12:18.669247   92071 main.go:141] libmachine: (ha-285481-m03)     <interface type='network'>
	I0315 23:12:18.669254   92071 main.go:141] libmachine: (ha-285481-m03)       <source network='mk-ha-285481'/>
	I0315 23:12:18.669263   92071 main.go:141] libmachine: (ha-285481-m03)       <model type='virtio'/>
	I0315 23:12:18.669268   92071 main.go:141] libmachine: (ha-285481-m03)     </interface>
	I0315 23:12:18.669276   92071 main.go:141] libmachine: (ha-285481-m03)     <interface type='network'>
	I0315 23:12:18.669287   92071 main.go:141] libmachine: (ha-285481-m03)       <source network='default'/>
	I0315 23:12:18.669295   92071 main.go:141] libmachine: (ha-285481-m03)       <model type='virtio'/>
	I0315 23:12:18.669300   92071 main.go:141] libmachine: (ha-285481-m03)     </interface>
	I0315 23:12:18.669305   92071 main.go:141] libmachine: (ha-285481-m03)     <serial type='pty'>
	I0315 23:12:18.669315   92071 main.go:141] libmachine: (ha-285481-m03)       <target port='0'/>
	I0315 23:12:18.669321   92071 main.go:141] libmachine: (ha-285481-m03)     </serial>
	I0315 23:12:18.669328   92071 main.go:141] libmachine: (ha-285481-m03)     <console type='pty'>
	I0315 23:12:18.669334   92071 main.go:141] libmachine: (ha-285481-m03)       <target type='serial' port='0'/>
	I0315 23:12:18.669340   92071 main.go:141] libmachine: (ha-285481-m03)     </console>
	I0315 23:12:18.669348   92071 main.go:141] libmachine: (ha-285481-m03)     <rng model='virtio'>
	I0315 23:12:18.669359   92071 main.go:141] libmachine: (ha-285481-m03)       <backend model='random'>/dev/random</backend>
	I0315 23:12:18.669367   92071 main.go:141] libmachine: (ha-285481-m03)     </rng>
	I0315 23:12:18.669381   92071 main.go:141] libmachine: (ha-285481-m03)     
	I0315 23:12:18.669404   92071 main.go:141] libmachine: (ha-285481-m03)     
	I0315 23:12:18.669422   92071 main.go:141] libmachine: (ha-285481-m03)   </devices>
	I0315 23:12:18.669430   92071 main.go:141] libmachine: (ha-285481-m03) </domain>
	I0315 23:12:18.669436   92071 main.go:141] libmachine: (ha-285481-m03) 
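The XML logged above is the libvirt domain definition for the new VM: the boot2docker ISO attached as a CD-ROM for first boot, the raw disk as the system drive, and two virtio NICs on the private mk-ha-285481 and default networks. minikube's kvm2 driver submits this through the libvirt Go bindings; the sketch below only shows the rough virsh equivalent, and the XML file path is hypothetical.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Hypothetical path to a file containing the domain XML printed in the log.
	steps := [][]string{
		{"virsh", "define", "/tmp/ha-285481-m03.xml"}, // register the domain
		{"virsh", "start", "ha-285481-m03"},           // boot it
	}
	for _, args := range steps {
		out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
		fmt.Printf("%v: %s", args, out)
		if err != nil {
			panic(err)
		}
	}
}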
	I0315 23:12:18.676587   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:c7:8a:c5 in network default
	I0315 23:12:18.677150   92071 main.go:141] libmachine: (ha-285481-m03) Ensuring networks are active...
	I0315 23:12:18.677176   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:18.677861   92071 main.go:141] libmachine: (ha-285481-m03) Ensuring network default is active
	I0315 23:12:18.678097   92071 main.go:141] libmachine: (ha-285481-m03) Ensuring network mk-ha-285481 is active
	I0315 23:12:18.678390   92071 main.go:141] libmachine: (ha-285481-m03) Getting domain xml...
	I0315 23:12:18.679036   92071 main.go:141] libmachine: (ha-285481-m03) Creating domain...
	I0315 23:12:19.886501   92071 main.go:141] libmachine: (ha-285481-m03) Waiting to get IP...
	I0315 23:12:19.887495   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:19.887955   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:19.887982   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:19.887945   92758 retry.go:31] will retry after 294.942371ms: waiting for machine to come up
	I0315 23:12:20.184463   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:20.184973   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:20.185007   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:20.184934   92758 retry.go:31] will retry after 259.466564ms: waiting for machine to come up
	I0315 23:12:20.446542   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:20.447077   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:20.447104   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:20.446959   92758 retry.go:31] will retry after 423.883268ms: waiting for machine to come up
	I0315 23:12:20.872523   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:20.873052   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:20.873088   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:20.872999   92758 retry.go:31] will retry after 457.642128ms: waiting for machine to come up
	I0315 23:12:21.332692   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:21.333166   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:21.333200   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:21.333122   92758 retry.go:31] will retry after 759.65704ms: waiting for machine to come up
	I0315 23:12:22.094047   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:22.094587   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:22.094619   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:22.094522   92758 retry.go:31] will retry after 574.549303ms: waiting for machine to come up
	I0315 23:12:22.670205   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:22.670568   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:22.670594   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:22.670542   92758 retry.go:31] will retry after 797.984979ms: waiting for machine to come up
	I0315 23:12:23.469946   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:23.470310   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:23.470337   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:23.470277   92758 retry.go:31] will retry after 914.454189ms: waiting for machine to come up
	I0315 23:12:24.386053   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:24.386565   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:24.386598   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:24.386509   92758 retry.go:31] will retry after 1.507342364s: waiting for machine to come up
	I0315 23:12:25.896079   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:25.896558   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:25.896580   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:25.896506   92758 retry.go:31] will retry after 1.601064693s: waiting for machine to come up
	I0315 23:12:27.500415   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:27.500952   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:27.500983   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:27.500886   92758 retry.go:31] will retry after 1.881993459s: waiting for machine to come up
	I0315 23:12:29.384401   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:29.384831   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:29.384858   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:29.384769   92758 retry.go:31] will retry after 3.438780484s: waiting for machine to come up
	I0315 23:12:32.826689   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:32.827175   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:32.827205   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:32.827134   92758 retry.go:31] will retry after 3.812719047s: waiting for machine to come up
	I0315 23:12:36.644227   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:36.644595   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:36.644610   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:36.644571   92758 retry.go:31] will retry after 5.124301462s: waiting for machine to come up
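The sequence above is the driver's wait-for-IP loop: query the domain's DHCP lease, and on failure retry with growing, jittered delays (roughly 250ms at first, several seconds later on). A simplified sketch of that backoff pattern follows; lookupIP is a hypothetical stand-in for the real lease query, not minikube's retry.go.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for asking libvirt for the domain's
// current DHCP lease; it fails until the guest has obtained an address.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 15; attempt++ {
		if ip, err := lookupIP(); err == nil {
			fmt.Println("found IP:", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // jitter, like the varying delays above
		fmt.Printf("attempt %d failed, will retry after %v\n", attempt, wait)
		time.Sleep(wait)
		delay *= 2
		if delay > 5*time.Second {
			delay = 5 * time.Second // cap the backoff
		}
	}
	fmt.Println("gave up waiting for machine to come up")
}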
	I0315 23:12:41.772352   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.772777   92071 main.go:141] libmachine: (ha-285481-m03) Found IP for machine: 192.168.39.248
	I0315 23:12:41.772798   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has current primary IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.772803   92071 main.go:141] libmachine: (ha-285481-m03) Reserving static IP address...
	I0315 23:12:41.773209   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find host DHCP lease matching {name: "ha-285481-m03", mac: "52:54:00:2c:2e:06", ip: "192.168.39.248"} in network mk-ha-285481
	I0315 23:12:41.847840   92071 main.go:141] libmachine: (ha-285481-m03) Reserved static IP address: 192.168.39.248
	I0315 23:12:41.847869   92071 main.go:141] libmachine: (ha-285481-m03) Waiting for SSH to be available...
	I0315 23:12:41.847879   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Getting to WaitForSSH function...
	I0315 23:12:41.850500   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.850948   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:41.850984   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.851202   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Using SSH client type: external
	I0315 23:12:41.851239   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa (-rw-------)
	I0315 23:12:41.851269   92071 main.go:141] libmachine: (ha-285481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 23:12:41.851288   92071 main.go:141] libmachine: (ha-285481-m03) DBG | About to run SSH command:
	I0315 23:12:41.851300   92071 main.go:141] libmachine: (ha-285481-m03) DBG | exit 0
	I0315 23:12:41.975575   92071 main.go:141] libmachine: (ha-285481-m03) DBG | SSH cmd err, output: <nil>: 
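WaitForSSH above shells out to the system ssh binary with host-key checking disabled and runs "exit 0"; a zero exit status means the guest's SSH daemon is reachable. A minimal sketch of that probe, with the IP and key path copied from the log purely for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa",
		"docker@192.168.39.248",
		"exit 0",
	}
	if err := exec.Command("ssh", args...).Run(); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}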
	I0315 23:12:41.975889   92071 main.go:141] libmachine: (ha-285481-m03) KVM machine creation complete!
	I0315 23:12:41.976173   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetConfigRaw
	I0315 23:12:41.976905   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:41.977130   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:41.977312   92071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 23:12:41.977329   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:12:41.978677   92071 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 23:12:41.978691   92071 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 23:12:41.978697   92071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 23:12:41.978706   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:41.980947   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.981287   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:41.981309   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.981416   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:41.981595   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:41.981752   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:41.981890   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:41.982082   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:41.982315   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:41.982326   92071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 23:12:42.086831   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:12:42.086873   92071 main.go:141] libmachine: Detecting the provisioner...
	I0315 23:12:42.086885   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.089594   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.090029   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.090061   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.090193   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.090394   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.090536   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.090729   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.090934   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.091132   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.091145   92071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 23:12:42.204339   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 23:12:42.204411   92071 main.go:141] libmachine: found compatible host: buildroot
	I0315 23:12:42.204426   92071 main.go:141] libmachine: Provisioning with buildroot...
	I0315 23:12:42.204453   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:42.204724   92071 buildroot.go:166] provisioning hostname "ha-285481-m03"
	I0315 23:12:42.204757   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:42.204958   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.207496   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.207839   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.207872   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.208028   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.208207   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.208341   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.208458   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.208649   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.208853   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.208872   92071 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481-m03 && echo "ha-285481-m03" | sudo tee /etc/hostname
	I0315 23:12:42.332039   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481-m03
	
	I0315 23:12:42.332063   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.335108   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.335524   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.335548   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.335719   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.335919   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.336109   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.336245   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.336434   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.336650   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.336670   92071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:12:42.453567   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
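The shell fragment above makes the new hostname resolvable locally: if no /etc/hosts line already ends in ha-285481-m03, it either rewrites an existing 127.0.1.1 entry or appends one. A small sketch that composes the same snippet for an arbitrary hostname (illustrative only; the real provisioner sends it through the SSH runner):

package main

import "fmt"

// hostsCmd builds the /etc/hosts-fixing shell snippet seen in the log for a
// given hostname; the string would be handed to an SSH runner.
func hostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsCmd("ha-285481-m03"))
}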
	I0315 23:12:42.453608   92071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:12:42.453632   92071 buildroot.go:174] setting up certificates
	I0315 23:12:42.453688   92071 provision.go:84] configureAuth start
	I0315 23:12:42.453703   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:42.453989   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:42.456785   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.457200   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.457237   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.457343   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.459502   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.459827   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.459850   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.460039   92071 provision.go:143] copyHostCerts
	I0315 23:12:42.460072   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:12:42.460116   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:12:42.460129   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:12:42.460223   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:12:42.460359   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:12:42.460386   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:12:42.460396   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:12:42.460441   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:12:42.460515   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:12:42.460538   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:12:42.460548   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:12:42.460583   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:12:42.460664   92071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481-m03 san=[127.0.0.1 192.168.39.248 ha-285481-m03 localhost minikube]
	I0315 23:12:42.577057   92071 provision.go:177] copyRemoteCerts
	I0315 23:12:42.577137   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:12:42.577163   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.579866   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.580226   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.580258   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.580500   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.580737   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.580912   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.581055   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:42.670738   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:12:42.670838   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:12:42.700340   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:12:42.700452   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 23:12:42.727635   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:12:42.727733   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:12:42.755424   92071 provision.go:87] duration metric: took 301.717801ms to configureAuth
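The configureAuth step above copies the host CA material and generates a server certificate whose SANs cover the new machine's addresses (127.0.0.1, 192.168.39.248, ha-285481-m03, localhost, minikube). A rough Go sketch of that certificate shape, self-signed for brevity (the real step signs with the ca.pem/ca-key.pem pair) and using the three-year lifetime implied by CertExpiration:26280h0m0s later in the log:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		log.Fatal(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-285481-m03"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0), // ~26280h, as in the profile config
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"ha-285481-m03", "localhost", "minikube"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.248")},
    	}
    	// Self-signed here; minikube signs with its CA instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }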
	I0315 23:12:42.755460   92071 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:12:42.755758   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:42.755860   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.758358   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.758739   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.758768   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.758970   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.759174   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.759359   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.759552   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.759722   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.759892   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.759907   92071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:12:43.060428   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:12:43.060465   92071 main.go:141] libmachine: Checking connection to Docker...
	I0315 23:12:43.060477   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetURL
	I0315 23:12:43.061826   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Using libvirt version 6000000
	I0315 23:12:43.064630   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.065109   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.065143   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.065324   92071 main.go:141] libmachine: Docker is up and running!
	I0315 23:12:43.065347   92071 main.go:141] libmachine: Reticulating splines...
	I0315 23:12:43.065356   92071 client.go:171] duration metric: took 24.866995333s to LocalClient.Create
	I0315 23:12:43.065385   92071 start.go:167] duration metric: took 24.867062069s to libmachine.API.Create "ha-285481"
	I0315 23:12:43.065397   92071 start.go:293] postStartSetup for "ha-285481-m03" (driver="kvm2")
	I0315 23:12:43.065410   92071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:12:43.065432   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.065692   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:12:43.065726   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:43.067982   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.068366   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.068397   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.068508   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.068707   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.068884   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.069026   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:43.158841   92071 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:12:43.163346   92071 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:12:43.163375   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:12:43.163438   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:12:43.163505   92071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:12:43.163517   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:12:43.163599   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:12:43.174569   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:12:43.201998   92071 start.go:296] duration metric: took 136.583437ms for postStartSetup
	I0315 23:12:43.202064   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetConfigRaw
	I0315 23:12:43.202632   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:43.206085   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.206573   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.206606   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.206912   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:12:43.207142   92071 start.go:128] duration metric: took 25.027219533s to createHost
	I0315 23:12:43.207171   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:43.209281   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.209601   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.209632   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.209789   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.209987   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.210182   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.210342   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.210489   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:43.210684   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:43.210700   92071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:12:43.324346   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544363.307920175
	
	I0315 23:12:43.324376   92071 fix.go:216] guest clock: 1710544363.307920175
	I0315 23:12:43.324387   92071 fix.go:229] Guest: 2024-03-15 23:12:43.307920175 +0000 UTC Remote: 2024-03-15 23:12:43.207158104 +0000 UTC m=+167.425486209 (delta=100.762071ms)
	I0315 23:12:43.324408   92071 fix.go:200] guest clock delta is within tolerance: 100.762071ms
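The skew check compares the guest clock reading against the host-side timestamp. In miniature, with the two values from the log (the tolerance constant below is a placeholder, not necessarily the one minikube applies):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	guest := time.Unix(1710544363, 307920175)  // guest clock reading from the log
    	remote := time.Unix(1710544363, 207158104) // host-side timestamp from the log
    	delta := guest.Sub(remote)                 // ≈ 100.762071ms, matching the log line
    	const tolerance = time.Second              // hypothetical threshold for this sketch
    	fmt.Printf("delta=%v, within %v: %v\n", delta, tolerance, delta > -tolerance && delta < tolerance)
    }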
	I0315 23:12:43.324415   92071 start.go:83] releasing machines lock for "ha-285481-m03", held for 25.144613516s
	I0315 23:12:43.324441   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.324747   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:43.327799   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.328213   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.328238   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.331426   92071 out.go:177] * Found network options:
	I0315 23:12:43.333077   92071 out.go:177]   - NO_PROXY=192.168.39.23,192.168.39.201
	W0315 23:12:43.334556   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 23:12:43.334575   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:12:43.334592   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.335192   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.335431   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.335551   92071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:12:43.335592   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	W0315 23:12:43.335622   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 23:12:43.335646   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:12:43.335724   92071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:12:43.335751   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:43.338519   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.338731   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.338948   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.338990   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.339179   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.339212   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.339251   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.339452   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.339491   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.339665   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.339686   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.339791   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.339960   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:43.339971   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:43.577337   92071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:12:43.584594   92071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:12:43.584660   92071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:12:43.603148   92071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 23:12:43.603176   92071 start.go:494] detecting cgroup driver to use...
	I0315 23:12:43.603254   92071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:12:43.620843   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:12:43.635416   92071 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:12:43.635492   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:12:43.650382   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:12:43.664227   92071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:12:43.795432   92071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:12:43.967211   92071 docker.go:233] disabling docker service ...
	I0315 23:12:43.967298   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:12:43.984855   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:12:43.999121   92071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:12:44.127393   92071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:12:44.257425   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:12:44.273058   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:12:44.292407   92071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:12:44.292480   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.303136   92071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:12:44.303205   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.314595   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.326671   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.338689   92071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:12:44.350004   92071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:12:44.360073   92071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 23:12:44.360137   92071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 23:12:44.374552   92071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:12:44.386155   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:12:44.527162   92071 ssh_runner.go:195] Run: sudo systemctl restart crio
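The sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager before restarting the service. An in-memory Go equivalent of those two rewrites, purely illustrative (the real step edits /etc/crio/crio.conf.d/02-crio.conf over SSH):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := `[crio.runtime]
    cgroup_manager = "systemd"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.8"
    `
    	// Same intent as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    	// Same intent as: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|'
    	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    	fmt.Print(conf)
    }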
	I0315 23:12:44.675315   92071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:12:44.675420   92071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:12:44.680423   92071 start.go:562] Will wait 60s for crictl version
	I0315 23:12:44.680486   92071 ssh_runner.go:195] Run: which crictl
	I0315 23:12:44.684546   92071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:12:44.722943   92071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:12:44.723021   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:12:44.755906   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:12:44.792822   92071 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:12:44.794529   92071 out.go:177]   - env NO_PROXY=192.168.39.23
	I0315 23:12:44.796178   92071 out.go:177]   - env NO_PROXY=192.168.39.23,192.168.39.201
	I0315 23:12:44.797748   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:44.800798   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:44.801303   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:44.801335   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:44.801533   92071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:12:44.806349   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:12:44.821209   92071 mustload.go:65] Loading cluster: ha-285481
	I0315 23:12:44.821473   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:44.821790   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:44.821857   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:44.836729   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35509
	I0315 23:12:44.837210   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:44.837715   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:44.837737   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:44.838055   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:44.838223   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:12:44.839734   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:12:44.840018   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:44.840056   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:44.854233   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0315 23:12:44.854722   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:44.855190   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:44.855237   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:44.855612   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:44.855825   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:12:44.856014   92071 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.248
	I0315 23:12:44.856026   92071 certs.go:194] generating shared ca certs ...
	I0315 23:12:44.856045   92071 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:12:44.856177   92071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:12:44.856221   92071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:12:44.856237   92071 certs.go:256] generating profile certs ...
	I0315 23:12:44.856303   92071 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:12:44.856327   92071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee
	I0315 23:12:44.856341   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.201 192.168.39.248 192.168.39.254]
	I0315 23:12:45.085122   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee ...
	I0315 23:12:45.085159   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee: {Name:mk207d01a1ed1f040cd6a8eb5e410f01a685be92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:12:45.085339   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee ...
	I0315 23:12:45.085352   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee: {Name:mkd9e113f45cebab606d7ca0da3b1251ca4d3330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:12:45.085431   92071 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:12:45.085559   92071 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
	I0315 23:12:45.085708   92071 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:12:45.085728   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:12:45.085745   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:12:45.085765   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:12:45.085782   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:12:45.085796   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:12:45.085812   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:12:45.085824   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:12:45.085839   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:12:45.085901   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:12:45.085942   92071 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:12:45.085957   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:12:45.085987   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:12:45.086012   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:12:45.086044   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:12:45.086099   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:12:45.086137   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.086164   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.086183   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.086222   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:12:45.089898   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:45.090358   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:12:45.090379   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:45.090583   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:12:45.090785   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:12:45.091000   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:12:45.091172   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:12:45.163734   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0315 23:12:45.169808   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 23:12:45.183848   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0315 23:12:45.188463   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 23:12:45.200689   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 23:12:45.205131   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 23:12:45.216757   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0315 23:12:45.221251   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0315 23:12:45.233959   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0315 23:12:45.238897   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 23:12:45.251083   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0315 23:12:45.255533   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0315 23:12:45.266981   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:12:45.294449   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:12:45.320923   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:12:45.346156   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:12:45.371713   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0315 23:12:45.398008   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 23:12:45.426230   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:12:45.452890   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:12:45.480419   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:12:45.505969   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:12:45.532200   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:12:45.559949   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 23:12:45.577560   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 23:12:45.596317   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 23:12:45.615105   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0315 23:12:45.633463   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 23:12:45.651378   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0315 23:12:45.669053   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 23:12:45.687106   92071 ssh_runner.go:195] Run: openssl version
	I0315 23:12:45.693315   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:12:45.705279   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.710090   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.710152   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.716237   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:12:45.728450   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:12:45.740118   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.745248   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.745304   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.751304   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:12:45.762370   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:12:45.776861   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.782531   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.782602   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.788994   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:12:45.800920   92071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:12:45.805284   92071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 23:12:45.805343   92071 kubeadm.go:928] updating node {m03 192.168.39.248 8443 v1.28.4 crio true true} ...
	I0315 23:12:45.805445   92071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:12:45.805471   92071 kube-vip.go:111] generating kube-vip config ...
	I0315 23:12:45.805509   92071 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:12:45.824239   92071 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:12:45.824321   92071 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
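Because the generated manifest enables vip_arp, kube-vip announces 192.168.39.254 via ARP, which only works if the VIP sits on the same L2 network as the control-plane nodes. A small sanity-check sketch (not part of minikube) using the node addresses and the /24 prefix reported in the DHCP leases:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	nodeNet := netip.MustParsePrefix("192.168.39.0/24") // mk-ha-285481 (virbr1), Prefix:24 per the leases
    	vip := netip.MustParseAddr("192.168.39.254")        // APIServerHAVIP / kube-vip address
    	for _, ip := range []string{"192.168.39.23", "192.168.39.201", "192.168.39.248"} {
    		fmt.Printf("node %-15s in %s: %v\n", ip, nodeNet, nodeNet.Contains(netip.MustParseAddr(ip)))
    	}
    	fmt.Printf("vip  %-15s in %s: %v\n", vip, nodeNet, nodeNet.Contains(vip))
    }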
	I0315 23:12:45.824387   92071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:12:45.835730   92071 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 23:12:45.835805   92071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 23:12:45.846693   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0315 23:12:45.846730   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0315 23:12:45.846745   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
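Each binary URL above carries a checksum=file:<url>.sha256 companion. A self-contained Go sketch of that verification idea (minikube's own downloader additionally caches and retries, so this is not its implementation):

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"strings"
    )

    // fetch downloads a URL into memory.
    func fetch(url string) ([]byte, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return nil, err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
    	}
    	return io.ReadAll(resp.Body)
    }

    func main() {
    	url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
    	bin, err := fetch(url)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	sum, err := fetch(url + ".sha256")
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	want := strings.Fields(string(sum))[0]
    	got := sha256.Sum256(bin)
    	if hex.EncodeToString(got[:]) != want {
    		fmt.Fprintln(os.Stderr, "checksum mismatch for", url)
    		os.Exit(1)
    	}
    	fmt.Println("checksum verified:", want)
    }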
	I0315 23:12:45.846753   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:12:45.846754   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:12:45.846760   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:12:45.846821   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:12:45.846829   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:12:45.866657   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:12:45.866728   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 23:12:45.866758   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:12:45.866762   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 23:12:45.866760   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 23:12:45.866788   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 23:12:45.902504   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 23:12:45.902549   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0315 23:12:46.883210   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 23:12:46.893254   92071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0315 23:12:46.911175   92071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:12:46.929189   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:12:46.946418   92071 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:12:46.950656   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:12:46.964628   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:12:47.081272   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:12:47.101634   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:12:47.102110   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:47.102165   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:47.117619   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0315 23:12:47.118076   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:47.118683   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:47.118716   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:47.119093   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:47.119348   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
I0315 23:12:47.119517   92071 start.go:316] joinCluster: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:12:47.119690   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 23:12:47.119715   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:12:47.123547   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:47.124035   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:12:47.124062   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:47.124249   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:12:47.124454   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:12:47.124654   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:12:47.124849   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:12:47.298363   92071 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:12:47.298421   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token om5jb8.iwf3rk95i3babp1m --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m03 --control-plane --apiserver-advertise-address=192.168.39.248 --apiserver-bind-port=8443"
	I0315 23:13:13.853139   92071 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token om5jb8.iwf3rk95i3babp1m --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m03 --control-plane --apiserver-advertise-address=192.168.39.248 --apiserver-bind-port=8443": (26.554684226s)
	I0315 23:13:13.853180   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 23:13:14.399489   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-285481-m03 minikube.k8s.io/updated_at=2024_03_15T23_13_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=ha-285481 minikube.k8s.io/primary=false
	I0315 23:13:14.530391   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-285481-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 23:13:14.693280   92071 start.go:318] duration metric: took 27.573759647s to joinCluster
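The join itself is just a remote command timed by ssh_runner, which logs a Run: line and a matching Completed: line with the elapsed duration. The same pattern in miniature, with a harmless stand-in command (the real runner executes kubeadm over SSH inside the guest):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// Stand-in for the kubeadm join/token commands seen in the log.
    	cmdline := "echo simulated: kubeadm token create --print-join-command --ttl=0"
    	start := time.Now()
    	out, err := exec.Command("/bin/bash", "-c", cmdline).CombinedOutput()
    	fmt.Printf("Completed in %s (err=%v)\n%s", time.Since(start), err, out)
    }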
	I0315 23:13:14.693371   92071 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:13:14.694896   92071 out.go:177] * Verifying Kubernetes components...
	I0315 23:13:14.693881   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:13:14.696469   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:13:14.884831   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:13:14.904693   92071 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
I0315 23:13:14.904986   92071 kapi.go:59] client config for ha-285481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt", KeyFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key", CAFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 23:13:14.905060   92071 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.23:8443
	I0315 23:13:14.905270   92071 node_ready.go:35] waiting up to 6m0s for node "ha-285481-m03" to be "Ready" ...
	I0315 23:13:14.905367   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:14.905374   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:14.905382   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:14.905387   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:14.909792   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:15.406319   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:15.406340   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:15.406347   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:15.406351   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:15.416402   92071 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 23:13:15.906127   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:15.906172   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:15.906187   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:15.906192   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:15.916700   92071 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 23:13:16.406323   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:16.406343   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:16.406351   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:16.406356   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:16.410832   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:16.905550   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:16.905573   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:16.905581   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:16.905586   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:16.912053   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:16.912931   92071 node_ready.go:53] node "ha-285481-m03" has status "Ready":"False"
	I0315 23:13:17.406253   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:17.406280   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:17.406291   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:17.406299   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:17.412392   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:17.905680   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:17.905708   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:17.905720   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:17.905726   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:17.909546   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:18.406047   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:18.406068   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:18.406076   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:18.406081   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:18.409875   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:18.905910   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:18.905938   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:18.905952   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:18.905957   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:18.909851   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:19.405728   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:19.405751   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:19.405759   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:19.405764   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:19.410731   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:19.411392   92071 node_ready.go:53] node "ha-285481-m03" has status "Ready":"False"
	I0315 23:13:19.905626   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:19.905652   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:19.905660   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:19.905665   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:19.910162   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:20.405798   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:20.405825   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:20.405848   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:20.405854   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:20.409891   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:20.906021   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:20.906051   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:20.906060   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:20.906064   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:20.912040   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:21.405744   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:21.405770   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.405781   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.405786   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.410282   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:21.410912   92071 node_ready.go:49] node "ha-285481-m03" has status "Ready":"True"
	I0315 23:13:21.410928   92071 node_ready.go:38] duration metric: took 6.505644255s for node "ha-285481-m03" to be "Ready" ...
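The repeated GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03 requests above are the node_ready wait: the node object is re-fetched about every 500ms until its Ready condition reports True. A comparable wait written against client-go could look roughly like the sketch below; this is an illustration only, not minikube's node_ready.go, and the kubeconfig path is an assumption.

// Illustrative sketch: poll a node's Ready condition until it is True or a
// timeout elapses, as the node_ready wait in the log above does.
// Not minikube's code; the kubeconfig path is an assumption.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), cs, "ha-285481-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node ha-285481-m03 is Ready")
}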
	I0315 23:13:21.410937   92071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 23:13:21.410997   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:21.411006   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.411013   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.411018   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.425810   92071 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0315 23:13:21.433857   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.433944   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9c44k
	I0315 23:13:21.433952   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.433960   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.433966   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.437935   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.438523   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:21.438538   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.438545   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.438549   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.442057   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.442526   92071 pod_ready.go:92] pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.442544   92071 pod_ready.go:81] duration metric: took 8.662575ms for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.442553   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.442615   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qxtp4
	I0315 23:13:21.442623   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.442629   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.442633   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.446399   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.447284   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:21.447308   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.447336   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.447346   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.450462   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.451291   92071 pod_ready.go:92] pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.451314   92071 pod_ready.go:81] duration metric: took 8.75368ms for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.451350   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.451430   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481
	I0315 23:13:21.451439   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.451446   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.451449   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.454754   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.455344   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:21.455362   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.455373   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.455379   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.459946   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:21.460422   92071 pod_ready.go:92] pod "etcd-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.460444   92071 pod_ready.go:81] duration metric: took 9.081809ms for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.460457   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.460534   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m02
	I0315 23:13:21.460545   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.460555   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.460562   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.464643   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:21.465631   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:21.465649   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.465659   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.465664   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.468753   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.469214   92071 pod_ready.go:92] pod "etcd-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.469230   92071 pod_ready.go:81] duration metric: took 8.765821ms for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.469239   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.606680   92071 request.go:629] Waited for 137.362155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:21.606755   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:21.606763   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.606771   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.606777   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.610339   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.806487   92071 request.go:629] Waited for 195.392779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:21.806569   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:21.806578   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.806585   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.806589   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.810198   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:22.006461   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:22.006488   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.006499   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.006507   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.012164   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:22.206607   92071 request.go:629] Waited for 192.345896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.206666   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.206671   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.206679   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.206688   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.210395   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:22.469813   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:22.469845   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.469857   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.469862   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.481010   92071 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0315 23:13:22.605900   92071 request.go:629] Waited for 124.238103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.605977   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.605985   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.605995   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.606002   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.609903   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:22.970138   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:22.970163   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.970174   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.970179   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.974094   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.006024   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:23.006047   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.006056   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.006062   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.009489   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.470182   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:23.470205   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.470212   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.470216   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.474202   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.475138   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:23.475155   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.475162   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.475166   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.478201   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.478895   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:23.970179   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:23.970202   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.970210   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.970213   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.974695   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:23.975281   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:23.975296   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.975304   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.975308   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.978264   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:24.469945   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:24.469975   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.469987   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.469993   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.475144   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:24.475859   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:24.475877   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.475887   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.475890   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.479859   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:24.970199   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:24.970224   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.970233   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.970238   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.973947   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:24.974851   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:24.974866   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.974873   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.974876   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.978718   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:25.469505   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:25.469530   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.469540   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.469546   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.474138   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:25.474744   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:25.474762   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.474773   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.474780   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.490305   92071 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0315 23:13:25.491799   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:25.969880   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:25.969904   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.969913   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.969916   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.973908   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:25.974609   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:25.974624   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.974634   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.974641   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.977966   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:26.469864   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:26.469889   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.469898   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.469903   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.475271   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:26.475978   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:26.475999   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.476010   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.476019   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.479931   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:26.969877   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:26.969902   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.969911   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.969915   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.973743   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:26.974506   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:26.974517   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.974525   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.974529   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.977528   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:27.469628   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:27.469659   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.469670   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.469677   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.474113   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:27.474830   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:27.474847   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.474854   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.474858   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.477892   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:27.970275   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:27.970298   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.970305   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.970308   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.974859   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:27.975700   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:27.975718   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.975725   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.975730   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.979337   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:27.979923   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:28.469657   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:28.469684   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.469694   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.469701   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.473613   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:28.474430   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:28.474454   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.474464   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.474471   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.477714   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:28.969713   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:28.969734   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.969743   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.969746   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.976616   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:28.977487   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:28.977505   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.977511   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.977516   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.981200   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:29.469819   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:29.469847   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.469860   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.469866   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.474255   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:29.475160   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:29.475174   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.475185   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.475192   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.478495   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:29.970193   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:29.970220   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.970231   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.970236   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.974520   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:29.975072   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:29.975087   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.975097   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.975104   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.978480   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:30.470360   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:30.470382   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.470391   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.470396   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.474753   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:30.475643   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:30.475660   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.475671   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.475677   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.478940   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:30.479779   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:30.969664   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:30.969687   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.969695   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.969701   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.973628   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:30.974296   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:30.974312   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.974319   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.974324   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.977371   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.470449   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:31.470470   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.470479   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.470483   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.474456   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.475443   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:31.475462   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.475471   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.475476   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.478883   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.479473   92071 pod_ready.go:92] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.479494   92071 pod_ready.go:81] duration metric: took 10.010248114s for pod "etcd-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.479512   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.479580   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481
	I0315 23:13:31.479590   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.479597   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.479601   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.482501   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:31.483380   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:31.483395   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.483405   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.483410   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.487247   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.487889   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.487909   92071 pod_ready.go:81] duration metric: took 8.390404ms for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.487918   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.487970   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m02
	I0315 23:13:31.487978   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.487985   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.487990   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.490837   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:31.491425   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:31.491438   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.491448   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.491458   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.494944   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.495674   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.495697   92071 pod_ready.go:81] duration metric: took 7.770928ms for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.495709   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.495763   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m03
	I0315 23:13:31.495774   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.495784   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.495790   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.499723   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.500755   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:31.500768   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.500775   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.500779   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.503461   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:31.503843   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.503861   92071 pod_ready.go:81] duration metric: took 8.14488ms for pod "kube-apiserver-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.503869   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.503940   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481
	I0315 23:13:31.503953   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.503963   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.503973   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.508754   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:31.509610   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:31.509623   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.509629   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.509632   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.513069   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.513615   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.513634   92071 pod_ready.go:81] duration metric: took 9.75855ms for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.513643   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.671055   92071 request.go:629] Waited for 157.312221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:13:31.671128   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:13:31.671136   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.671146   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.671159   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.675746   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:31.870935   92071 request.go:629] Waited for 194.363099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:31.871014   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:31.871021   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.871029   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.871037   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.874726   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.875426   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.875444   92071 pod_ready.go:81] duration metric: took 361.795409ms for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.875455   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.070769   92071 request.go:629] Waited for 195.244188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m03
	I0315 23:13:32.070861   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m03
	I0315 23:13:32.070873   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.070886   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.070897   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.074571   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:32.270812   92071 request.go:629] Waited for 195.382785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:32.270876   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:32.270881   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.270890   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.270897   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.276048   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:32.276742   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:32.276760   92071 pod_ready.go:81] duration metric: took 401.298691ms for pod "kube-controller-manager-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.276770   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.470907   92071 request.go:629] Waited for 194.045862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:13:32.470965   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:13:32.470971   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.470978   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.470983   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.474865   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:32.671180   92071 request.go:629] Waited for 195.397732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:32.671277   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:32.671284   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.671291   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.671295   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.675279   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:32.676042   92071 pod_ready.go:92] pod "kube-proxy-2hcgt" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:32.676066   92071 pod_ready.go:81] duration metric: took 399.288159ms for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.676092   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.871360   92071 request.go:629] Waited for 195.149955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:13:32.871443   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:13:32.871458   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.871467   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.871478   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.875046   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.070617   92071 request.go:629] Waited for 194.285892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.070680   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.070687   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.070696   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.070706   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.074257   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.074924   92071 pod_ready.go:92] pod "kube-proxy-cml9m" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:33.074939   92071 pod_ready.go:81] duration metric: took 398.836861ms for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.074950   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d2fjd" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.270679   92071 request.go:629] Waited for 195.647313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d2fjd
	I0315 23:13:33.270751   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d2fjd
	I0315 23:13:33.270763   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.270770   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.270775   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.275149   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:33.471378   92071 request.go:629] Waited for 195.368272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:33.471438   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:33.471443   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.471450   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.471455   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.475163   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.475904   92071 pod_ready.go:92] pod "kube-proxy-d2fjd" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:33.475925   92071 pod_ready.go:81] duration metric: took 400.969148ms for pod "kube-proxy-d2fjd" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.475938   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.671012   92071 request.go:629] Waited for 194.984048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:13:33.671071   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:13:33.671081   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.671089   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.671094   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.674657   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.870607   92071 request.go:629] Waited for 195.29247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.870671   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.870676   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.870684   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.870691   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.874711   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.875378   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:33.875397   92071 pod_ready.go:81] duration metric: took 399.450919ms for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.875408   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.071414   92071 request.go:629] Waited for 195.915601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:13:34.071495   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:13:34.071501   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.071508   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.071513   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.075507   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:34.270517   92071 request.go:629] Waited for 194.285622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:34.270580   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:34.270585   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.270594   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.270602   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.275216   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:34.277040   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:34.277067   92071 pod_ready.go:81] duration metric: took 401.647615ms for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.277081   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.471099   92071 request.go:629] Waited for 193.936989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m03
	I0315 23:13:34.471201   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m03
	I0315 23:13:34.471213   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.471224   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.471234   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.474997   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:34.670683   92071 request.go:629] Waited for 194.792633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:34.670761   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:34.670766   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.670774   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.670778   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.674900   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:34.675780   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:34.675801   92071 pod_ready.go:81] duration metric: took 398.711684ms for pod "kube-scheduler-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.675816   92071 pod_ready.go:38] duration metric: took 13.264863296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
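Each pod_ready check above is a GET on the pod followed by a GET on its node, and the pod passes once its Ready condition is True. A minimal sketch of the same condition check with client-go, assuming an illustrative kubeconfig path and reusing the pod name from the log:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podIsReady reports whether the pod's Ready condition is True, the
    // same condition the pod_ready waits above are polling for.
    func podIsReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Illustrative kubeconfig path; substitute your own.
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod, err := clientset.CoreV1().Pods("kube-system").Get(
            context.TODO(), "kube-scheduler-ha-285481", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("Ready:", podIsReady(pod))
    }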
	I0315 23:13:34.675836   92071 api_server.go:52] waiting for apiserver process to appear ...
	I0315 23:13:34.675902   92071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:13:34.693422   92071 api_server.go:72] duration metric: took 20.000003233s to wait for apiserver process to appear ...
	I0315 23:13:34.693457   92071 api_server.go:88] waiting for apiserver healthz status ...
	I0315 23:13:34.693481   92071 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I0315 23:13:34.698575   92071 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I0315 23:13:34.698681   92071 round_trippers.go:463] GET https://192.168.39.23:8443/version
	I0315 23:13:34.698693   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.698703   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.698714   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.699919   92071 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0315 23:13:34.699980   92071 api_server.go:141] control plane version: v1.28.4
	I0315 23:13:34.699995   92071 api_server.go:131] duration metric: took 6.532004ms to wait for apiserver health ...
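The apiserver health gate above has two parts: pgrep confirms a kube-apiserver process exists, then a plain HTTPS GET on /healthz must return 200 with the body "ok" (the /version request afterwards only reads the control-plane version). A minimal sketch of the /healthz probe, using the API server address from the log and skipping TLS verification purely to keep the example self-contained (a real client would present the cluster CA and client certificate):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // InsecureSkipVerify keeps the sketch short; do not do this outside
        // of an example.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.39.23:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("status=%d body=%q\n", resp.StatusCode, body)
    }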
	I0315 23:13:34.700006   92071 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 23:13:34.871282   92071 request.go:629] Waited for 171.202632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:34.871380   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:34.871389   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.871397   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.871404   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.879795   92071 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0315 23:13:34.885854   92071 system_pods.go:59] 24 kube-system pods found
	I0315 23:13:34.885883   92071 system_pods.go:61] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:13:34.885889   92071 system_pods.go:61] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:13:34.885892   92071 system_pods.go:61] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:13:34.885896   92071 system_pods.go:61] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:13:34.885899   92071 system_pods.go:61] "etcd-ha-285481-m03" [675ae74e-7e71-4fee-b8c1-b6b757a95643] Running
	I0315 23:13:34.885902   92071 system_pods.go:61] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:13:34.885904   92071 system_pods.go:61] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:13:34.885907   92071 system_pods.go:61] "kindnet-zptcr" [901a115d-b255-473b-8f60-236d2bead302] Running
	I0315 23:13:34.885911   92071 system_pods.go:61] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:13:34.885914   92071 system_pods.go:61] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:13:34.885917   92071 system_pods.go:61] "kube-apiserver-ha-285481-m03" [1bf2f928-6d7b-4b8a-bcb9-8f0120766edf] Running
	I0315 23:13:34.885920   92071 system_pods.go:61] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:13:34.885927   92071 system_pods.go:61] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:13:34.885930   92071 system_pods.go:61] "kube-controller-manager-ha-285481-m03" [974871d4-bf77-48d6-b5b0-2315381e40f0] Running
	I0315 23:13:34.885933   92071 system_pods.go:61] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:13:34.885935   92071 system_pods.go:61] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:13:34.885940   92071 system_pods.go:61] "kube-proxy-d2fjd" [d2fc9b42-7c35-4472-a8de-4f5dafe9d208] Running
	I0315 23:13:34.885943   92071 system_pods.go:61] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:13:34.885946   92071 system_pods.go:61] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:13:34.885948   92071 system_pods.go:61] "kube-scheduler-ha-285481-m03" [5b522c93-1875-436c-a84f-7a71b1a694f6] Running
	I0315 23:13:34.885951   92071 system_pods.go:61] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:13:34.885954   92071 system_pods.go:61] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:13:34.885957   92071 system_pods.go:61] "kube-vip-ha-285481-m03" [c73a666c-3bb1-4b8c-becc-574021feab19] Running
	I0315 23:13:34.885960   92071 system_pods.go:61] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:13:34.885966   92071 system_pods.go:74] duration metric: took 185.952256ms to wait for pod list to return data ...
	I0315 23:13:34.885979   92071 default_sa.go:34] waiting for default service account to be created ...
	I0315 23:13:35.071406   92071 request.go:629] Waited for 185.343202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:13:35.071478   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:13:35.071485   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:35.071494   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:35.071503   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:35.075603   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:35.075754   92071 default_sa.go:45] found service account: "default"
	I0315 23:13:35.075772   92071 default_sa.go:55] duration metric: took 189.785271ms for default service account to be created ...
	I0315 23:13:35.075799   92071 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 23:13:35.271363   92071 request.go:629] Waited for 195.456894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:35.271436   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:35.271443   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:35.271453   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:35.271470   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:35.278347   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:35.284755   92071 system_pods.go:86] 24 kube-system pods found
	I0315 23:13:35.284786   92071 system_pods.go:89] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:13:35.284792   92071 system_pods.go:89] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:13:35.284796   92071 system_pods.go:89] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:13:35.284800   92071 system_pods.go:89] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:13:35.284804   92071 system_pods.go:89] "etcd-ha-285481-m03" [675ae74e-7e71-4fee-b8c1-b6b757a95643] Running
	I0315 23:13:35.284808   92071 system_pods.go:89] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:13:35.284812   92071 system_pods.go:89] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:13:35.284816   92071 system_pods.go:89] "kindnet-zptcr" [901a115d-b255-473b-8f60-236d2bead302] Running
	I0315 23:13:35.284820   92071 system_pods.go:89] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:13:35.284824   92071 system_pods.go:89] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:13:35.284828   92071 system_pods.go:89] "kube-apiserver-ha-285481-m03" [1bf2f928-6d7b-4b8a-bcb9-8f0120766edf] Running
	I0315 23:13:35.284833   92071 system_pods.go:89] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:13:35.284837   92071 system_pods.go:89] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:13:35.284842   92071 system_pods.go:89] "kube-controller-manager-ha-285481-m03" [974871d4-bf77-48d6-b5b0-2315381e40f0] Running
	I0315 23:13:35.284848   92071 system_pods.go:89] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:13:35.284852   92071 system_pods.go:89] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:13:35.284856   92071 system_pods.go:89] "kube-proxy-d2fjd" [d2fc9b42-7c35-4472-a8de-4f5dafe9d208] Running
	I0315 23:13:35.284860   92071 system_pods.go:89] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:13:35.284872   92071 system_pods.go:89] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:13:35.284886   92071 system_pods.go:89] "kube-scheduler-ha-285481-m03" [5b522c93-1875-436c-a84f-7a71b1a694f6] Running
	I0315 23:13:35.284892   92071 system_pods.go:89] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:13:35.284897   92071 system_pods.go:89] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:13:35.284906   92071 system_pods.go:89] "kube-vip-ha-285481-m03" [c73a666c-3bb1-4b8c-becc-574021feab19] Running
	I0315 23:13:35.284912   92071 system_pods.go:89] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:13:35.284925   92071 system_pods.go:126] duration metric: took 209.11843ms to wait for k8s-apps to be running ...
	I0315 23:13:35.284938   92071 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 23:13:35.284996   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:13:35.301163   92071 system_svc.go:56] duration metric: took 16.210788ms WaitForService to wait for kubelet
	I0315 23:13:35.301196   92071 kubeadm.go:576] duration metric: took 20.607784825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
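The system_svc wait above reduces to the exit status of systemctl run over SSH on the node: zero means the kubelet unit is active. A minimal local sketch of the same check, run directly rather than through minikube's ssh_runner and using the plain unit name "kubelet":

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet <unit>` prints nothing; the result is
        // carried entirely in the exit code (0 = active).
        cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet")
        if err := cmd.Run(); err != nil {
            fmt.Println("kubelet is not active:", err)
            return
        }
        fmt.Println("kubelet is active")
    }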
	I0315 23:13:35.301216   92071 node_conditions.go:102] verifying NodePressure condition ...
	I0315 23:13:35.470546   92071 request.go:629] Waited for 169.255647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes
	I0315 23:13:35.470626   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes
	I0315 23:13:35.470633   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:35.470643   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:35.470650   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:35.476349   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:35.478398   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:13:35.478420   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:13:35.478434   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:13:35.478439   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:13:35.478444   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:13:35.478448   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:13:35.478454   92071 node_conditions.go:105] duration metric: took 177.232968ms to run NodePressure ...
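The NodePressure step lists every node and reads its reported capacity; the three cpu/ephemeral-storage pairs above are one line per node of the three-node HA cluster. A minimal sketch that prints the same two capacity fields for each node, assuming the same illustrative kubeconfig path as the earlier sketches:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // Capacity is a map of resource name to quantity; cpu and
            // ephemeral-storage are the two fields logged above.
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
                n.Name, cpu.String(), storage.String())
        }
    }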
	I0315 23:13:35.478470   92071 start.go:240] waiting for startup goroutines ...
	I0315 23:13:35.478529   92071 start.go:254] writing updated cluster config ...
	I0315 23:13:35.478854   92071 ssh_runner.go:195] Run: rm -f paused
	I0315 23:13:35.533308   92071 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0315 23:13:35.536411   92071 out.go:177] * Done! kubectl is now configured to use "ha-285481" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.843005359Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544626842980656,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5a8d1ac8-ac5c-402b-b7c6-3510106ffed2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.843886591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=45b5c17e-f120-482f-b6ab-cf81dd92d1e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.843960355Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=45b5c17e-f120-482f-b6ab-cf81dd92d1e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.844338058Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=45b5c17e-f120-482f-b6ab-cf81dd92d1e0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.898416212Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3f8fe5ad-2442-41d2-a653-f1dbb4d9ed2a name=/runtime.v1.RuntimeService/Version
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.898514952Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3f8fe5ad-2442-41d2-a653-f1dbb4d9ed2a name=/runtime.v1.RuntimeService/Version
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.900109788Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a04124d5-4528-46da-b883-4643901207e5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.900551169Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544626900528013,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a04124d5-4528-46da-b883-4643901207e5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.901286097Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d302dd6d-8375-4084-aea1-ac4044d54b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.901355874Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d302dd6d-8375-4084-aea1-ac4044d54b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.901683796Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d302dd6d-8375-4084-aea1-ac4044d54b42 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.942017834Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=28268781-8fb0-4633-b1d9-ec391ba862ce name=/runtime.v1.RuntimeService/Version
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.942110771Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=28268781-8fb0-4633-b1d9-ec391ba862ce name=/runtime.v1.RuntimeService/Version
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.943983345Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=74c265dc-bffb-468d-a1b9-1ddc85edbe2c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.944456399Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544626944434077,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=74c265dc-bffb-468d-a1b9-1ddc85edbe2c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.944932422Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5678b62f-155f-430a-b11e-25e01c53f60d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.945012757Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5678b62f-155f-430a-b11e-25e01c53f60d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.945291223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5678b62f-155f-430a-b11e-25e01c53f60d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.984485636Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4810a5ba-27ac-4c57-97ef-6afbed29dcce name=/runtime.v1.RuntimeService/Version
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.984577463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4810a5ba-27ac-4c57-97ef-6afbed29dcce name=/runtime.v1.RuntimeService/Version
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.986408021Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=02df6d9d-347a-456c-9cc5-c5460271dad6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.987098762Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544626987025193,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=02df6d9d-347a-456c-9cc5-c5460271dad6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.988038162Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a35b68a4-dabe-4d39-a9a9-bca6594efe57 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.988414204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a35b68a4-dabe-4d39-a9a9-bca6594efe57 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:17:06 ha-285481 crio[682]: time="2024-03-15 23:17:06.988731939Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a35b68a4-dabe-4d39-a9a9-bca6594efe57 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e21f8e6f1787       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   8857e9f8aa447       busybox-5b5d89c9d6-klvd7
	213c94783e488       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  1                   0a7887e08f455       kube-vip-ha-285481
	3a3c057a91d21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       1                   fef4071ee48b7       storage-provisioner
	3f54e9bdd6145       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   de78c4c3104b5       coredns-5dd5756b68-9c44k
	8e6eb1af2d4d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Exited              storage-provisioner       0                   fef4071ee48b7       storage-provisioner
	46eabb63fd66f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   99ea9ee0a5c9b       coredns-5dd5756b68-qxtp4
	047f19229a080       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   2cfe44ef27150       kindnet-9fd6f
	e7c7732963470       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   5404a98a681ea       kube-proxy-cml9m
	66a2aef00e4d9       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     6 minutes ago       Exited              kube-vip                  0                   0a7887e08f455       kube-vip-ha-285481
	bc2a1703be0ef       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   153d2f487f07d       kube-apiserver-ha-285481
	b1799ad1e14d3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   9a7f75d914382       kube-scheduler-ha-285481
	122f4a81c61ff       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   52b4af4847f8c       kube-controller-manager-ha-285481
	a6eaa3307ddf1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   8e777ceb1c377       etcd-ha-285481
	
	
	==> coredns [3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53] <==
	[INFO] 10.244.1.2:60720 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000102247s
	[INFO] 10.244.2.2:40209 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212669s
	[INFO] 10.244.2.2:50711 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188088s
	[INFO] 10.244.2.2:54282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186987s
	[INFO] 10.244.2.2:56388 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0028159s
	[INFO] 10.244.2.2:35533 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000228914s
	[INFO] 10.244.0.4:60496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116771s
	[INFO] 10.244.0.4:52905 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137131s
	[INFO] 10.244.0.4:56100 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107905s
	[INFO] 10.244.0.4:35690 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001556122s
	[INFO] 10.244.0.4:38982 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024934s
	[INFO] 10.244.1.2:41443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153641s
	[INFO] 10.244.1.2:38021 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125707s
	[INFO] 10.244.1.2:54662 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104577s
	[INFO] 10.244.1.2:58084 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186921s
	[INFO] 10.244.2.2:43382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132733s
	[INFO] 10.244.2.2:34481 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081396s
	[INFO] 10.244.0.4:49529 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097682s
	[INFO] 10.244.0.4:53261 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080312s
	[INFO] 10.244.1.2:48803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148992s
	[INFO] 10.244.1.2:55840 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107531s
	[INFO] 10.244.2.2:34212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003902135s
	[INFO] 10.244.0.4:33277 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128105s
	[INFO] 10.244.0.4:48728 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114263s
	[INFO] 10.244.1.2:37155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021268s
	
	
	==> coredns [46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316] <==
	[INFO] 10.244.1.2:33779 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001884513s
	[INFO] 10.244.2.2:59691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00342241s
	[INFO] 10.244.2.2:39895 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176934s
	[INFO] 10.244.2.2:39778 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147498s
	[INFO] 10.244.0.4:45123 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001734195s
	[INFO] 10.244.0.4:47704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197308s
	[INFO] 10.244.0.4:41096 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008016s
	[INFO] 10.244.1.2:33672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016687s
	[INFO] 10.244.1.2:44656 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947816s
	[INFO] 10.244.1.2:34454 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181291s
	[INFO] 10.244.1.2:57821 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001542248s
	[INFO] 10.244.2.2:50572 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014063s
	[INFO] 10.244.2.2:48373 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151728s
	[INFO] 10.244.0.4:34408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010262s
	[INFO] 10.244.0.4:39266 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108924s
	[INFO] 10.244.1.2:55315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217818s
	[INFO] 10.244.1.2:36711 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009775s
	[INFO] 10.244.2.2:41992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00201605s
	[INFO] 10.244.2.2:57037 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156874s
	[INFO] 10.244.2.2:46561 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165147s
	[INFO] 10.244.0.4:54226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009066s
	[INFO] 10.244.0.4:55001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129509s
	[INFO] 10.244.1.2:48297 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160887s
	[INFO] 10.244.1.2:55268 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112442s
	[INFO] 10.244.1.2:45416 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077587s
	
	
	==> describe nodes <==
	Name:               ha-285481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T23_10_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:10:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:17:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-285481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7afae64232d041e98363d899e90f24b0
	  System UUID:                7afae642-32d0-41e9-8363-d899e90f24b0
	  Boot ID:                    ac63bdb2-abe3-40ea-a654-ca3224dec308
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-klvd7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 coredns-5dd5756b68-9c44k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 coredns-5dd5756b68-qxtp4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m16s
	  kube-system                 etcd-ha-285481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m28s
	  kube-system                 kindnet-9fd6f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-285481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-controller-manager-ha-285481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-proxy-cml9m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-285481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-vip-ha-285481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m14s  kube-proxy       
	  Normal  Starting                 6m28s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m28s  kubelet          Node ha-285481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m28s  kubelet          Node ha-285481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m28s  kubelet          Node ha-285481 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m28s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m17s  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal  NodeReady                6m11s  kubelet          Node ha-285481 status is now: NodeReady
	  Normal  RegisteredNode           4m52s  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal  RegisteredNode           3m39s  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	
	
	Name:               ha-285481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_12_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:11:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:14:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-285481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f269fbf2ace479a8b9438486949ceb1
	  System UUID:                7f269fbf-2ace-479a-8b94-38486949ceb1
	  Boot ID:                    e6c50974-8177-41ea-975f-125b0237e5fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tgxps                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-285481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-pnxpk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-285481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-285481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-2hcgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-285481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-vip-ha-285481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m     kube-proxy       
	  Normal  RegisteredNode  4m52s  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode  3m39s  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  NodeNotReady    107s   node-controller  Node ha-285481-m02 status is now: NodeNotReady
	
	
	Name:               ha-285481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_13_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:13:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:17:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-285481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 efeed3fefd6b40f689eaa7f1842dcbc9
	  System UUID:                efeed3fe-fd6b-40f6-89ea-a7f1842dcbc9
	  Boot ID:                    b397d06d-e644-4adf-aa7d-d0201c317777
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc7rx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m31s
	  kube-system                 etcd-ha-285481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m56s
	  kube-system                 kindnet-zptcr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m56s
	  kube-system                 kube-apiserver-ha-285481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-controller-manager-ha-285481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m55s
	  kube-system                 kube-proxy-d2fjd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-scheduler-ha-285481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-vip-ha-285481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m53s  kube-proxy       
	  Normal  RegisteredNode  3m52s  node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal  RegisteredNode  3m52s  node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal  RegisteredNode  3m39s  node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	
	
	Name:               ha-285481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_14_18_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:17:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-285481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8e447d79a3745579ec32c4638493b56
	  System UUID:                d8e447d7-9a37-4557-9ec3-2c4638493b56
	  Boot ID:                    f361951e-559c-44a5-bf42-821f3dd951a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vzxwb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m50s
	  kube-system                 kube-proxy-sr2rg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m46s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m50s (x5 over 2m51s)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m50s (x5 over 2m51s)  kubelet          Node ha-285481-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m50s (x5 over 2m51s)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m49s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal  RegisteredNode           2m47s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal  NodeReady                2m42s                  kubelet          Node ha-285481-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar15 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051507] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040599] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar15 23:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.585346] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.665493] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.660439] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075855] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.154877] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.138534] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.233962] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.832504] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.064584] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.460453] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.636379] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.224199] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.090626] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.490228] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.030141] kauditd_printk_skb: 53 callbacks suppressed
	[Mar15 23:11] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca] <==
	{"level":"warn","ts":"2024-03-15T23:17:07.279082Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.284779Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.286999Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.290905Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.306953Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.315407Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.323055Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.327801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.335279Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.346888Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.353592Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.359907Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.36379Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.367595Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.375393Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.381494Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.385071Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.388381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.393469Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.397719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.410179Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.424057Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.438574Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.485155Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:17:07.489601Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:17:07 up 7 min,  0 users,  load average: 0.27, 0.45, 0.26
	Linux ha-285481 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7] <==
	I0315 23:16:31.891085       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:16:41.899509       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:16:41.899604       1 main.go:227] handling current node
	I0315 23:16:41.899700       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:16:41.899723       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:16:41.899896       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:16:41.899930       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:16:41.900009       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:16:41.900028       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:16:51.915863       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:16:51.916001       1 main.go:227] handling current node
	I0315 23:16:51.916036       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:16:51.916055       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:16:51.916189       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:16:51.916209       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:16:51.916277       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:16:51.916295       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:17:01.928793       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:17:01.929045       1 main.go:227] handling current node
	I0315 23:17:01.929107       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:17:01.929130       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:17:01.929316       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:17:01.929338       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:17:01.929398       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:17:01.929415       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db] <==
	Trace[1235452084]: [5.397257775s] [5.397257775s] END
	I0315 23:12:01.443018       1 trace.go:236] Trace[1775193868]: "Get" accept:application/json, */*,audit-id:b35596a2-86c4-4b9d-91c1-f8df68fb2085,client:192.168.39.23,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (15-Mar-2024 23:11:58.543) (total time: 2899ms):
	Trace[1775193868]: [2.899799978s] [2.899799978s] END
	I0315 23:12:01.444020       1 trace.go:236] Trace[1861158366]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0ed8b92f-1cd7-4980-a385-ba948b361209,client:192.168.39.201,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:54.814) (total time: 6629ms):
	Trace[1861158366]: ---"Write to database call failed" len:2996,err:etcdserver: leader changed 6629ms (23:12:01.443)
	Trace[1861158366]: [6.629950797s] [6.629950797s] END
	E0315 23:12:01.502448       1 controller.go:193] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-roaihxybwiytqfpuxfgxbi337e\": the object has been modified; please apply your changes to the latest version and try again"
	I0315 23:12:01.505824       1 trace.go:236] Trace[1015479745]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:38b1fbb8-c8fa-4faf-b8ec-bd716d940771,client:192.168.39.201,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:55.128) (total time: 6377ms):
	Trace[1015479745]: ["Create etcd3" audit-id:38b1fbb8-c8fa-4faf-b8ec-bd716d940771,key:/events/kube-system/kube-vip-ha-285481-m02.17bd12fc4599efea,type:*core.Event,resource:events 6376ms (23:11:55.129)
	Trace[1015479745]:  ---"Txn call succeeded" 6376ms (23:12:01.505)]
	Trace[1015479745]: [6.377075122s] [6.377075122s] END
	I0315 23:12:01.508373       1 trace.go:236] Trace[1095726586]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:2ea95511-7b5a-437f-a2c4-2d992d4a484d,client:192.168.39.23,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-285481-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (15-Mar-2024 23:12:00.621) (total time: 886ms):
	Trace[1095726586]: ["GuaranteedUpdate etcd3" audit-id:2ea95511-7b5a-437f-a2c4-2d992d4a484d,key:/minions/ha-285481-m02,type:*core.Node,resource:nodes 886ms (23:12:00.621)
	Trace[1095726586]:  ---"Txn call completed" 883ms (23:12:01.506)]
	Trace[1095726586]: ---"About to apply patch" 883ms (23:12:01.506)
	Trace[1095726586]: [886.94658ms] [886.94658ms] END
	I0315 23:12:01.508974       1 trace.go:236] Trace[769205823]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d4cd8845-b3b2-43ec-91f4-85e20e5637e1,client:192.168.39.201,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-285481-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (15-Mar-2024 23:11:56.670) (total time: 4838ms):
	Trace[769205823]: ["GuaranteedUpdate etcd3" audit-id:d4cd8845-b3b2-43ec-91f4-85e20e5637e1,key:/minions/ha-285481-m02,type:*core.Node,resource:nodes 4838ms (23:11:56.670)
	Trace[769205823]:  ---"Txn call completed" 4836ms (23:12:01.508)]
	Trace[769205823]: ---"Object stored in database" 4836ms (23:12:01.508)
	Trace[769205823]: [4.838727567s] [4.838727567s] END
	I0315 23:12:01.566708       1 trace.go:236] Trace[319283670]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a3f07780-b6a4-4adb-96a9-e1b4025cbcf4,client:192.168.39.201,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:56.203) (total time: 5363ms):
	Trace[319283670]: [5.363282306s] [5.363282306s] END
	I0315 23:12:01.568115       1 trace.go:236] Trace[752789856]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e48e3001-80b9-41bd-b231-80be6effb7f4,client:192.168.39.201,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:55.199) (total time: 6368ms):
	Trace[752789856]: [6.368395796s] [6.368395796s] END
	
	
	==> kube-controller-manager [122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079] <==
	I0315 23:13:39.635140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="196.744µs"
	E0315 23:14:16.299719       1 certificate_controller.go:146] Sync csr-dxp76 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dxp76": the object has been modified; please apply your changes to the latest version and try again
	I0315 23:14:17.768214       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-285481-m04\" does not exist"
	I0315 23:14:17.789114       1 range_allocator.go:380] "Set node PodCIDR" node="ha-285481-m04" podCIDRs=["10.244.3.0/24"]
	I0315 23:14:17.825165       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vzxwb"
	I0315 23:14:17.836261       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sr2rg"
	I0315 23:14:17.943996       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-4ch5l"
	I0315 23:14:17.953047       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9lkhd"
	I0315 23:14:18.066329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-q8k8l"
	I0315 23:14:18.142808       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-cmmxh"
	I0315 23:14:20.507194       1 event.go:307] "Event occurred" object="ha-285481-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller"
	I0315 23:14:20.513850       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-285481-m04"
	I0315 23:14:25.501575       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-285481-m04"
	I0315 23:15:20.731259       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-285481-m04"
	I0315 23:15:20.734416       1 event.go:307] "Event occurred" object="ha-285481-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-285481-m02 status is now: NodeNotReady"
	I0315 23:15:20.757734       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.774611       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-tgxps" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.805552       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-2hcgt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.826501       1 event.go:307] "Event occurred" object="kube-system/kindnet-pnxpk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.837819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.450903ms"
	I0315 23:15:20.838054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.757µs"
	I0315 23:15:20.852388       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.872297       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.883895       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.899198       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2] <==
	I0315 23:10:52.335496       1 server_others.go:69] "Using iptables proxy"
	I0315 23:10:52.356464       1 node.go:141] Successfully retrieved node IP: 192.168.39.23
	I0315 23:10:52.452610       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:10:52.452703       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:10:52.455121       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:10:52.455940       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:10:52.456590       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:10:52.456708       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:10:52.458279       1 config.go:188] "Starting service config controller"
	I0315 23:10:52.465236       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:10:52.458585       1 config.go:315] "Starting node config controller"
	I0315 23:10:52.465825       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 23:10:52.461576       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:10:52.465925       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:10:52.565730       1 shared_informer.go:318] Caches are synced for service config
	I0315 23:10:52.566925       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:10:52.567844       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938] <==
	W0315 23:10:37.039340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 23:10:37.039431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 23:10:37.050033       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 23:10:37.050162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 23:10:37.074550       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 23:10:37.075079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0315 23:10:40.089781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 23:13:11.599531       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-psf5b\": pod kindnet-psf5b is already assigned to node \"ha-285481-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-psf5b" node="ha-285481-m03"
	E0315 23:13:11.600127       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 47ed4790-7753-4041-bcf7-d384de226727(kube-system/kindnet-psf5b) wasn't assumed so cannot be forgotten"
	E0315 23:13:11.600419       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-psf5b\": pod kindnet-psf5b is already assigned to node \"ha-285481-m03\"" pod="kube-system/kindnet-psf5b"
	I0315 23:13:11.600807       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-psf5b" node="ha-285481-m03"
	E0315 23:14:17.875483       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sr2rg\": pod kube-proxy-sr2rg is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sr2rg" node="ha-285481-m04"
	E0315 23:14:17.876096       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 104a5e4c-e568-4936-904d-e82b59620b8b(kube-system/kube-proxy-sr2rg) wasn't assumed so cannot be forgotten"
	E0315 23:14:17.876829       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sr2rg\": pod kube-proxy-sr2rg is already assigned to node \"ha-285481-m04\"" pod="kube-system/kube-proxy-sr2rg"
	I0315 23:14:17.877084       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sr2rg" node="ha-285481-m04"
	E0315 23:14:17.902915       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4ch5l\": pod kindnet-4ch5l is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4ch5l" node="ha-285481-m04"
	E0315 23:14:17.903231       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4ch5l\": pod kindnet-4ch5l is already assigned to node \"ha-285481-m04\"" pod="kube-system/kindnet-4ch5l"
	E0315 23:14:18.042227       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q8k8l\": pod kube-proxy-q8k8l is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q8k8l" node="ha-285481-m04"
	E0315 23:14:18.042314       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 3385d524-1d32-4a17-be74-cc7e4ec17cf6(kube-system/kube-proxy-q8k8l) wasn't assumed so cannot be forgotten"
	E0315 23:14:18.042360       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q8k8l\": pod kube-proxy-q8k8l is already assigned to node \"ha-285481-m04\"" pod="kube-system/kube-proxy-q8k8l"
	I0315 23:14:18.042416       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q8k8l" node="ha-285481-m04"
	E0315 23:14:18.042955       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cmmxh\": pod kindnet-cmmxh is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cmmxh" node="ha-285481-m04"
	E0315 23:14:18.043026       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7c6c2048-28ef-49fc-909a-aad75912b3b1(kube-system/kindnet-cmmxh) wasn't assumed so cannot be forgotten"
	E0315 23:14:18.043052       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cmmxh\": pod kindnet-cmmxh is already assigned to node \"ha-285481-m04\"" pod="kube-system/kindnet-cmmxh"
	I0315 23:14:18.043082       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cmmxh" node="ha-285481-m04"
	
	
	==> kubelet <==
	Mar 15 23:12:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:12:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:12:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:13:36 ha-285481 kubelet[1384]: I0315 23:13:36.567780    1384 topology_manager.go:215] "Topology Admit Handler" podUID="fce71bb2-0072-40ff-88b2-fa91d9ca758f" podNamespace="default" podName="busybox-5b5d89c9d6-klvd7"
	Mar 15 23:13:36 ha-285481 kubelet[1384]: I0315 23:13:36.754972    1384 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8fr\" (UniqueName: \"kubernetes.io/projected/fce71bb2-0072-40ff-88b2-fa91d9ca758f-kube-api-access-2f8fr\") pod \"busybox-5b5d89c9d6-klvd7\" (UID: \"fce71bb2-0072-40ff-88b2-fa91d9ca758f\") " pod="default/busybox-5b5d89c9d6-klvd7"
	Mar 15 23:13:39 ha-285481 kubelet[1384]: E0315 23:13:39.425986    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:13:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:13:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:13:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:13:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:14:39 ha-285481 kubelet[1384]: E0315 23:14:39.424262    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:14:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:14:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:14:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:14:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:15:39 ha-285481 kubelet[1384]: E0315 23:15:39.425764    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:15:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:15:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:15:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:15:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:16:39 ha-285481 kubelet[1384]: E0315 23:16:39.420145    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:16:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:16:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:16:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:16:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-285481 -n ha-285481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-285481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (142.15s)
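For context, the two post-mortem probes above (the API-server status check and the non-Running-pods query) can be reproduced outside the harness. The following is a minimal Go sketch, not the actual helpers_test.go code; it assumes the minikube binary and kubectl are on PATH and that the ha-285481 profile/context from the logs still exists.

	// A hypothetical reproduction of the post-mortem probes; not helpers_test.go.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and prints its combined output, mirroring the
	// "(dbg) Run:" lines in the report.
	func run(name string, args ...string) {
		out, err := exec.Command(name, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s(err=%v)\n", name, args, out, err)
	}

	func main() {
		// Same two probes the harness ran after the StopSecondaryNode failure.
		run("out/minikube-linux-amd64", "status", "--format={{.APIServer}}",
			"-p", "ha-285481", "-n", "ha-285481")
		run("kubectl", "--context", "ha-285481", "get", "po",
			"-o=jsonpath={.items[*].metadata.name}", "-A",
			"--field-selector=status.phase!=Running")
	}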

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (50.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (3.190605791s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:17:12.133923   96454 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:12.134087   96454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:12.134104   96454 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:12.134111   96454 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:12.134330   96454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:12.134539   96454 out.go:298] Setting JSON to false
	I0315 23:17:12.134580   96454 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:12.134677   96454 notify.go:220] Checking for updates...
	I0315 23:17:12.135150   96454 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:12.135172   96454 status.go:255] checking status of ha-285481 ...
	I0315 23:17:12.135723   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:12.135802   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:12.155246   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43647
	I0315 23:17:12.155741   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:12.156382   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:12.156403   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:12.156736   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:12.156926   96454 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:12.158389   96454 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:12.158408   96454 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:12.158667   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:12.158723   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:12.174438   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40749
	I0315 23:17:12.174916   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:12.175418   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:12.175447   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:12.175804   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:12.176006   96454 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:12.178808   96454 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:12.179204   96454 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:12.179229   96454 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:12.179349   96454 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:12.179640   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:12.179688   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:12.194385   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33929
	I0315 23:17:12.194829   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:12.195277   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:12.195301   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:12.195672   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:12.195826   96454 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:12.196089   96454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:12.196114   96454 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:12.198909   96454 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:12.199459   96454 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:12.199487   96454 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:12.199634   96454 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:12.199975   96454 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:12.200140   96454 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:12.200307   96454 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:12.280930   96454 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:12.288258   96454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:12.305245   96454 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:12.305273   96454 api_server.go:166] Checking apiserver status ...
	I0315 23:17:12.305303   96454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:12.321029   96454 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:12.332048   96454 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:12.332102   96454 ssh_runner.go:195] Run: ls
	I0315 23:17:12.336413   96454 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:12.342530   96454 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:12.342555   96454 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:12.342569   96454 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:12.342584   96454 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:12.342866   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:12.342909   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:12.357661   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34883
	I0315 23:17:12.358151   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:12.358571   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:12.358601   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:12.358945   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:12.359116   96454 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:12.360685   96454 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:17:12.360717   96454 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:12.361087   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:12.361139   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:12.375667   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39513
	I0315 23:17:12.376171   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:12.376697   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:12.376727   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:12.377068   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:12.377299   96454 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:17:12.380264   96454 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:12.380706   96454 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:12.380738   96454 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:12.380824   96454 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:12.381134   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:12.381177   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:12.396258   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34135
	I0315 23:17:12.397036   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:12.398362   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:12.398388   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:12.398733   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:12.398932   96454 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:17:12.399149   96454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:12.399173   96454 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:17:12.402169   96454 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:12.402663   96454 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:12.402691   96454 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:12.402859   96454 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:17:12.403013   96454 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:17:12.403177   96454 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:17:12.403363   96454 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:14.907657   96454 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:14.907776   96454 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:14.907801   96454 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:14.907809   96454 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:14.907829   96454 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:14.907840   96454 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:14.908285   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:14.908348   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:14.924047   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33603
	I0315 23:17:14.924599   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:14.925158   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:14.925187   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:14.925500   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:14.925678   96454 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:14.927315   96454 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:14.927356   96454 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:14.927639   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:14.927675   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:14.942287   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37053
	I0315 23:17:14.942716   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:14.943192   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:14.943221   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:14.943578   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:14.943752   96454 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:14.946730   96454 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:14.947108   96454 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:14.947135   96454 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:14.947254   96454 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:14.947599   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:14.947641   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:14.961871   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34411
	I0315 23:17:14.962363   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:14.962856   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:14.962881   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:14.963175   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:14.963403   96454 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:14.963599   96454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:14.963622   96454 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:14.966634   96454 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:14.967122   96454 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:14.967144   96454 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:14.967299   96454 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:14.967479   96454 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:14.967663   96454 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:14.967825   96454 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:15.050994   96454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:15.067570   96454 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:15.067603   96454 api_server.go:166] Checking apiserver status ...
	I0315 23:17:15.067643   96454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:15.081158   96454 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:15.091441   96454 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:15.091492   96454 ssh_runner.go:195] Run: ls
	I0315 23:17:15.096962   96454 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:15.101742   96454 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:15.101770   96454 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:15.101782   96454 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:15.101803   96454 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:15.102191   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:15.102230   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:15.117341   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43761
	I0315 23:17:15.117873   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:15.118441   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:15.118472   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:15.118794   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:15.118995   96454 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:15.120693   96454 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:15.120713   96454 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:15.121072   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:15.121116   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:15.136800   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45483
	I0315 23:17:15.137386   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:15.137888   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:15.137911   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:15.138237   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:15.138487   96454 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:15.141108   96454 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:15.141487   96454 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:15.141517   96454 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:15.141676   96454 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:15.141989   96454 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:15.142026   96454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:15.157796   96454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39099
	I0315 23:17:15.158194   96454 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:15.158667   96454 main.go:141] libmachine: Using API Version  1
	I0315 23:17:15.158691   96454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:15.159053   96454 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:15.159250   96454 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:15.159491   96454 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:15.159517   96454 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:15.162779   96454 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:15.163219   96454 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:15.163241   96454 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:15.163414   96454 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:15.163587   96454 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:15.163764   96454 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:15.163889   96454 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:15.250920   96454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:15.267729   96454 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
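The Error/Nonexistent state reported for ha-285481-m02 in the status output above stems from the SSH dial failing with "connect: no route to host" while the node is still coming back up. Below is a minimal Go sketch of that kind of TCP reachability probe; it is not minikube's actual sshutil/status code, and the address is simply the one shown in the log above.

	// A hypothetical TCP probe of a node's SSH port; not minikube's sshutil code.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// sshReachable reports whether a TCP connection to addr can be opened within
	// the timeout; a rebooting node typically fails with "connect: no route to
	// host", as seen in the status output above.
	func sshReachable(addr string, timeout time.Duration) bool {
		conn, err := net.DialTimeout("tcp", addr, timeout)
		if err != nil {
			fmt.Printf("dial %s: %v\n", addr, err)
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println("ssh reachable:", sshReachable("192.168.39.201:22", 5*time.Second))
	}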
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (5.083365874s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:17:16.384018   96549 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:16.384269   96549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:16.384280   96549 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:16.384284   96549 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:16.384471   96549 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:16.384646   96549 out.go:298] Setting JSON to false
	I0315 23:17:16.384686   96549 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:16.384807   96549 notify.go:220] Checking for updates...
	I0315 23:17:16.385090   96549 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:16.385109   96549 status.go:255] checking status of ha-285481 ...
	I0315 23:17:16.385483   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:16.385537   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:16.401172   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42009
	I0315 23:17:16.401697   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:16.402316   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:16.402350   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:16.402834   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:16.403062   96549 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:16.404933   96549 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:16.404951   96549 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:16.405233   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:16.405282   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:16.419974   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41863
	I0315 23:17:16.420353   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:16.420848   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:16.420869   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:16.421179   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:16.421370   96549 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:16.424120   96549 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:16.424510   96549 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:16.424547   96549 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:16.424634   96549 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:16.424924   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:16.424966   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:16.439936   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0315 23:17:16.440338   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:16.440802   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:16.440853   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:16.441132   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:16.441286   96549 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:16.441461   96549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:16.441481   96549 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:16.443916   96549 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:16.444297   96549 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:16.444320   96549 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:16.444449   96549 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:16.444655   96549 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:16.444790   96549 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:16.444946   96549 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:16.523690   96549 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:16.532683   96549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:16.550457   96549 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:16.550485   96549 api_server.go:166] Checking apiserver status ...
	I0315 23:17:16.550520   96549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:16.568571   96549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:16.580354   96549 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:16.580422   96549 ssh_runner.go:195] Run: ls
	I0315 23:17:16.585162   96549 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:16.589563   96549 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:16.589589   96549 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:16.589602   96549 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:16.589624   96549 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:16.590047   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:16.590112   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:16.605251   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34383
	I0315 23:17:16.605656   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:16.606212   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:16.606241   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:16.606534   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:16.606744   96549 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:16.608405   96549 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:17:16.608426   96549 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:16.608846   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:16.608895   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:16.625221   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39923
	I0315 23:17:16.625686   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:16.626177   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:16.626199   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:16.626493   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:16.626657   96549 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:17:16.629631   96549 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:16.630128   96549 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:16.630151   96549 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:16.630325   96549 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:16.630627   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:16.630664   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:16.648195   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38787
	I0315 23:17:16.648635   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:16.649110   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:16.649133   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:16.649512   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:16.649739   96549 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:17:16.649940   96549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:16.649964   96549 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:17:16.653082   96549 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:16.653558   96549 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:16.653588   96549 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:16.653811   96549 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:17:16.653979   96549 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:17:16.654152   96549 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:17:16.654296   96549 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:17.979681   96549 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:17.979729   96549 retry.go:31] will retry after 190.297074ms: dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:21.055630   96549 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:21.055732   96549 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:21.055757   96549 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:21.055768   96549 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:21.055818   96549 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:21.055832   96549 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:21.056237   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:21.056287   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:21.071002   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35425
	I0315 23:17:21.071409   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:21.071855   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:21.071878   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:21.072271   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:21.072505   96549 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:21.073982   96549 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:21.074002   96549 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:21.074299   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:21.074333   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:21.088409   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I0315 23:17:21.088805   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:21.089261   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:21.089280   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:21.089571   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:21.089775   96549 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:21.092589   96549 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:21.093056   96549 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:21.093083   96549 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:21.093193   96549 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:21.093591   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:21.093636   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:21.107917   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37863
	I0315 23:17:21.108297   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:21.108758   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:21.108780   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:21.109151   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:21.109333   96549 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:21.109524   96549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:21.109549   96549 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:21.112098   96549 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:21.112473   96549 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:21.112501   96549 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:21.112633   96549 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:21.112815   96549 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:21.112954   96549 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:21.113100   96549 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:21.195384   96549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:21.212197   96549 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:21.212225   96549 api_server.go:166] Checking apiserver status ...
	I0315 23:17:21.212265   96549 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:21.225851   96549 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:21.236306   96549 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:21.236369   96549 ssh_runner.go:195] Run: ls
	I0315 23:17:21.240933   96549 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:21.247132   96549 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:21.247152   96549 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:21.247160   96549 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:21.247174   96549 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:21.247494   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:21.247533   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:21.262583   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38489
	I0315 23:17:21.262995   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:21.263548   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:21.263569   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:21.263949   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:21.264160   96549 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:21.265474   96549 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:21.265494   96549 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:21.265899   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:21.265947   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:21.279926   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42295
	I0315 23:17:21.280363   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:21.280794   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:21.280812   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:21.281121   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:21.281293   96549 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:21.283605   96549 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:21.284056   96549 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:21.284087   96549 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:21.284228   96549 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:21.284498   96549 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:21.284529   96549 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:21.298407   96549 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0315 23:17:21.298797   96549 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:21.299284   96549 main.go:141] libmachine: Using API Version  1
	I0315 23:17:21.299305   96549 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:21.299614   96549 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:21.299819   96549 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:21.300019   96549 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:21.300038   96549 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:21.303014   96549 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:21.303484   96549 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:21.303517   96549 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:21.303627   96549 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:21.303802   96549 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:21.304030   96549 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:21.304168   96549 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:21.394833   96549 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:21.409970   96549 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
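For ha-285481-m02 the probe never gets an SSH session: every dial to 192.168.39.201:22 fails with "connect: no route to host", is retried after a short delay (retry.go), and the node is finally reported as Host:Error with Kubelet and APIServer marked Nonexistent, which is what drives the non-zero exit status. A minimal sketch of that retry-then-give-up pattern (the attempt count and delays here are made up for illustration, not minikube's actual values):

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry keeps trying to reach the node's SSH port and gives up after
// maxAttempts, roughly the behaviour behind the "will retry after ..." and
// final "status error" lines in the log.
func dialWithRetry(addr string, maxAttempts int, delay time.Duration) error {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		lastErr = err
		fmt.Printf("dial failure (will retry after %v): %v\n", delay, err)
		time.Sleep(delay)
	}
	return fmt.Errorf("host unreachable after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	if err := dialWithRetry("192.168.39.201:22", 3, 200*time.Millisecond); err != nil {
		// At this point the node would be reported Host:Error, Kubelet/APIServer Nonexistent.
		fmt.Println("status error:", err)
	}
}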
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (4.96494322s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:17:22.944478   96645 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:22.944591   96645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:22.944606   96645 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:22.944609   96645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:22.944797   96645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:22.944969   96645 out.go:298] Setting JSON to false
	I0315 23:17:22.945007   96645 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:22.945110   96645 notify.go:220] Checking for updates...
	I0315 23:17:22.945359   96645 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:22.945371   96645 status.go:255] checking status of ha-285481 ...
	I0315 23:17:22.945785   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:22.945844   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:22.961918   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34229
	I0315 23:17:22.962349   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:22.962881   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:22.962910   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:22.963268   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:22.963491   96645 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:22.965014   96645 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:22.965029   96645 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:22.965283   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:22.965317   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:22.979939   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41113
	I0315 23:17:22.980322   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:22.980864   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:22.980885   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:22.981276   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:22.981483   96645 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:22.984387   96645 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:22.984819   96645 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:22.984873   96645 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:22.984969   96645 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:22.985339   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:22.985375   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:23.000713   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38595
	I0315 23:17:23.001123   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:23.001603   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:23.001631   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:23.002038   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:23.002290   96645 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:23.002521   96645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:23.002550   96645 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:23.005371   96645 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:23.005801   96645 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:23.005838   96645 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:23.005989   96645 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:23.006147   96645 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:23.006300   96645 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:23.006420   96645 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:23.083073   96645 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:23.089589   96645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:23.106089   96645 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:23.106115   96645 api_server.go:166] Checking apiserver status ...
	I0315 23:17:23.106183   96645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:23.124560   96645 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:23.138864   96645 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:23.138928   96645 ssh_runner.go:195] Run: ls
	I0315 23:17:23.144408   96645 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:23.149529   96645 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:23.149560   96645 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:23.149591   96645 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:23.149621   96645 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:23.150013   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:23.150056   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:23.164820   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33547
	I0315 23:17:23.165279   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:23.165774   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:23.165796   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:23.166126   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:23.166334   96645 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:23.168051   96645 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:17:23.168073   96645 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:23.168467   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:23.168510   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:23.183095   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37635
	I0315 23:17:23.183647   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:23.184192   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:23.184225   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:23.184580   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:23.184769   96645 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:17:23.187773   96645 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:23.188242   96645 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:23.188278   96645 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:23.188367   96645 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:23.188698   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:23.188740   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:23.203759   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37307
	I0315 23:17:23.204243   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:23.204704   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:23.204729   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:23.205069   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:23.205242   96645 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:17:23.205403   96645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:23.205430   96645 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:17:23.208006   96645 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:23.208366   96645 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:23.208387   96645 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:23.208553   96645 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:17:23.208729   96645 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:17:23.208886   96645 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:17:23.208980   96645 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:24.127585   96645 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:24.127641   96645 retry.go:31] will retry after 292.232688ms: dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:27.483600   96645 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:27.483687   96645 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:27.483702   96645 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:27.483709   96645 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:27.483745   96645 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:27.483753   96645 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:27.484081   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:27.484122   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:27.499814   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33457
	I0315 23:17:27.500327   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:27.500862   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:27.500894   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:27.501315   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:27.501599   96645 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:27.503377   96645 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:27.503420   96645 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:27.503815   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:27.503873   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:27.517766   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35013
	I0315 23:17:27.518173   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:27.518545   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:27.518562   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:27.518827   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:27.519016   96645 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:27.521706   96645 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:27.522136   96645 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:27.522161   96645 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:27.522310   96645 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:27.522613   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:27.522652   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:27.536561   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39255
	I0315 23:17:27.536955   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:27.537385   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:27.537409   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:27.537733   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:27.537911   96645 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:27.538134   96645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:27.538155   96645 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:27.540796   96645 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:27.541224   96645 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:27.541259   96645 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:27.541428   96645 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:27.541628   96645 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:27.541811   96645 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:27.541944   96645 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:27.624004   96645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:27.643723   96645 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:27.643757   96645 api_server.go:166] Checking apiserver status ...
	I0315 23:17:27.643799   96645 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:27.660265   96645 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:27.675813   96645 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:27.675869   96645 ssh_runner.go:195] Run: ls
	I0315 23:17:27.684273   96645 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:27.690674   96645 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:27.690698   96645 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:27.690707   96645 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:27.690723   96645 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:27.691009   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:27.691049   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:27.705935   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44763
	I0315 23:17:27.706379   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:27.706924   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:27.706948   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:27.707239   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:27.707430   96645 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:27.708807   96645 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:27.708825   96645 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:27.709122   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:27.709171   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:27.723316   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38793
	I0315 23:17:27.723749   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:27.724236   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:27.724261   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:27.724544   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:27.724748   96645 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:27.727246   96645 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:27.727662   96645 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:27.727687   96645 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:27.727839   96645 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:27.728137   96645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:27.728171   96645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:27.742017   96645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0315 23:17:27.742402   96645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:27.742814   96645 main.go:141] libmachine: Using API Version  1
	I0315 23:17:27.742841   96645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:27.743179   96645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:27.743395   96645 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:27.743579   96645 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:27.743602   96645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:27.746533   96645 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:27.746982   96645 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:27.747021   96645 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:27.747173   96645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:27.747375   96645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:27.747528   96645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:27.747687   96645 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:27.834448   96645 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:27.850010   96645 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
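On the control-plane nodes the probe goes one step further: it reads the kubeconfig server address (https://192.168.39.254:8443, the cluster's virtual IP), finds the kube-apiserver process with pgrep, tolerates the failed freezer-cgroup lookup (logged as a warning only), and then GETs /healthz; the 200 "ok" response is what makes the apiserver field read Running. A hedged sketch of that final health check (the TLS handling below is a simplification for the example; a real client would trust the cluster CA instead of skipping verification):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs the same kind of probe the log shows against the
// apiserver /healthz endpoint and treats a 200 response with body "ok" as healthy.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Simplification: skip certificate verification rather than
			// loading the cluster CA bundle.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	healthy, err := checkHealthz("https://192.168.39.254:8443/healthz")
	fmt.Printf("apiserver healthy=%v err=%v\n", healthy, err)
}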
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (4.56718166s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:17:29.796434   96740 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:29.796593   96740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:29.796606   96740 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:29.796612   96740 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:29.796821   96740 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:29.797011   96740 out.go:298] Setting JSON to false
	I0315 23:17:29.797060   96740 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:29.797156   96740 notify.go:220] Checking for updates...
	I0315 23:17:29.797491   96740 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:29.797510   96740 status.go:255] checking status of ha-285481 ...
	I0315 23:17:29.797940   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:29.798012   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:29.815226   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37855
	I0315 23:17:29.815682   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:29.816232   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:29.816259   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:29.816666   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:29.816864   96740 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:29.818655   96740 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:29.818673   96740 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:29.818957   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:29.818991   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:29.834591   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34617
	I0315 23:17:29.834969   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:29.835476   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:29.835502   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:29.835816   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:29.836013   96740 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:29.838877   96740 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:29.839360   96740 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:29.839392   96740 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:29.839513   96740 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:29.839781   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:29.839822   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:29.854793   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39819
	I0315 23:17:29.855177   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:29.855647   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:29.855670   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:29.855932   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:29.856077   96740 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:29.856245   96740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:29.856270   96740 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:29.858636   96740 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:29.858989   96740 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:29.859013   96740 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:29.859175   96740 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:29.859379   96740 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:29.859547   96740 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:29.859685   96740 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:29.939432   96740 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:29.946234   96740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:29.963129   96740 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:29.963165   96740 api_server.go:166] Checking apiserver status ...
	I0315 23:17:29.963211   96740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:29.980379   96740 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:29.990431   96740 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:29.990498   96740 ssh_runner.go:195] Run: ls
	I0315 23:17:29.995669   96740 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:30.001833   96740 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:30.001856   96740 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:30.001867   96740 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:30.001883   96740 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:30.002183   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:30.002220   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:30.017396   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43795
	I0315 23:17:30.017883   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:30.018416   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:30.018445   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:30.018813   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:30.019041   96740 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:30.020790   96740 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:17:30.020812   96740 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:30.021084   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:30.021119   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:30.036492   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42425
	I0315 23:17:30.037193   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:30.037884   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:30.037934   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:30.038320   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:30.038481   96740 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:17:30.041194   96740 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:30.041569   96740 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:30.041599   96740 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:30.041715   96740 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:30.042004   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:30.042047   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:30.059560   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46651
	I0315 23:17:30.060005   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:30.060516   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:30.060537   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:30.060912   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:30.061093   96740 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:17:30.061299   96740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:30.061324   96740 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:17:30.064058   96740 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:30.064492   96740 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:30.064514   96740 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:30.064640   96740 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:17:30.064797   96740 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:17:30.064943   96740 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:17:30.065075   96740 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:30.555510   96740 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:30.555568   96740 retry.go:31] will retry after 327.440776ms: dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:33.947562   96740 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:33.947657   96740 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:33.947673   96740 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:33.947683   96740 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:33.947718   96740 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:33.947732   96740 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:33.948078   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:33.948133   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:33.962970   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39723
	I0315 23:17:33.963416   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:33.963862   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:33.963886   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:33.964272   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:33.964493   96740 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:33.966103   96740 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:33.966121   96740 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:33.966396   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:33.966430   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:33.981441   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0315 23:17:33.981891   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:33.982424   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:33.982446   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:33.982729   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:33.982934   96740 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:33.985737   96740 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:33.986233   96740 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:33.986265   96740 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:33.986383   96740 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:33.986694   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:33.986730   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:34.002076   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37555
	I0315 23:17:34.002551   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:34.003023   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:34.003052   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:34.003403   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:34.003634   96740 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:34.003867   96740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:34.003899   96740 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:34.006845   96740 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:34.007232   96740 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:34.007262   96740 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:34.007445   96740 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:34.007598   96740 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:34.007705   96740 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:34.007793   96740 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:34.087556   96740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:34.103874   96740 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:34.103901   96740 api_server.go:166] Checking apiserver status ...
	I0315 23:17:34.103933   96740 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:34.121212   96740 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:34.133497   96740 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:34.133553   96740 ssh_runner.go:195] Run: ls
	I0315 23:17:34.138711   96740 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:34.143405   96740 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:34.143438   96740 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:34.143450   96740 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:34.143472   96740 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:34.143843   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:34.143893   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:34.158872   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45587
	I0315 23:17:34.159311   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:34.159848   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:34.159879   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:34.160276   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:34.160482   96740 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:34.162397   96740 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:34.162416   96740 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:34.162723   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:34.162773   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:34.177347   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38411
	I0315 23:17:34.177856   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:34.178444   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:34.178467   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:34.178828   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:34.179052   96740 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:34.182237   96740 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:34.182754   96740 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:34.182794   96740 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:34.182934   96740 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:34.183262   96740 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:34.183302   96740 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:34.197854   96740 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41061
	I0315 23:17:34.198350   96740 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:34.198868   96740 main.go:141] libmachine: Using API Version  1
	I0315 23:17:34.198895   96740 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:34.199207   96740 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:34.199426   96740 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:34.199639   96740 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:34.199667   96740 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:34.202657   96740 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:34.203216   96740 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:34.203249   96740 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:34.203438   96740 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:34.203589   96740 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:34.203750   96740 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:34.203924   96740 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:34.290831   96740 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:34.306798   96740 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
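The exit status 3 above comes from the per-node storage probe: the status command SSHes into each node and runs df -h /var, and the dial to ha-285481-m02 at 192.168.39.201:22 fails with "no route to host" while the other nodes answer, so that node is reported as Host:Error. The sketch below is a hypothetical, standalone way to rerun that probe by hand, reusing the key path, user, and IP reported in the log; it is not minikube's own code and assumes an ssh client is on PATH.

// Hypothetical reproduction of the storage check logged above (not minikube's
// implementation): run df -h /var over SSH against the unreachable node.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("ssh",
		"-i", "/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa",
		"-o", "ConnectTimeout=5",
		"docker@192.168.39.201",
		"df -h /var | awk 'NR==2{print $5}'",
	).CombinedOutput()
	if err != nil {
		// With the node unreachable this fails the same way the log does
		// ("no route to host" / connection timeout).
		fmt.Printf("storage probe failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("/var usage on ha-285481-m02: %s", out)
}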
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (3.764548369s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:17:36.984195   96850 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:36.984319   96850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:36.984328   96850 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:36.984332   96850 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:36.984529   96850 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:36.984703   96850 out.go:298] Setting JSON to false
	I0315 23:17:36.984742   96850 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:36.984868   96850 notify.go:220] Checking for updates...
	I0315 23:17:36.985183   96850 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:36.985201   96850 status.go:255] checking status of ha-285481 ...
	I0315 23:17:36.985597   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:36.985657   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:37.001788   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45899
	I0315 23:17:37.002405   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:37.003183   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:37.003217   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:37.003710   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:37.004292   96850 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:37.006062   96850 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:37.006087   96850 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:37.006436   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:37.006482   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:37.021506   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36045
	I0315 23:17:37.021942   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:37.022391   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:37.022419   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:37.022744   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:37.022966   96850 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:37.026018   96850 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:37.026399   96850 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:37.026436   96850 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:37.026565   96850 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:37.026870   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:37.026920   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:37.041732   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42727
	I0315 23:17:37.042090   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:37.042526   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:37.042545   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:37.042860   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:37.043104   96850 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:37.043311   96850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:37.043372   96850 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:37.046244   96850 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:37.046733   96850 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:37.046760   96850 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:37.046937   96850 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:37.047109   96850 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:37.047282   96850 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:37.047440   96850 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:37.131445   96850 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:37.137794   96850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:37.155692   96850 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:37.155721   96850 api_server.go:166] Checking apiserver status ...
	I0315 23:17:37.155757   96850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:37.173136   96850 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:37.184479   96850 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:37.184549   96850 ssh_runner.go:195] Run: ls
	I0315 23:17:37.189374   96850 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:37.196678   96850 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:37.196711   96850 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:37.196721   96850 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:37.196739   96850 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:37.197090   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:37.197138   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:37.214389   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42705
	I0315 23:17:37.215001   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:37.215591   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:37.215618   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:37.216033   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:37.216232   96850 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:37.217767   96850 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:17:37.217786   96850 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:37.218099   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:37.218153   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:37.235977   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33681
	I0315 23:17:37.236403   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:37.237018   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:37.237056   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:37.237361   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:37.237558   96850 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:17:37.240658   96850 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:37.241124   96850 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:37.241149   96850 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:37.241235   96850 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:37.241540   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:37.241584   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:37.256478   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37851
	I0315 23:17:37.256877   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:37.257354   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:37.257379   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:37.257700   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:37.257892   96850 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:17:37.258091   96850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:37.258135   96850 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:17:37.260921   96850 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:37.261418   96850 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:37.261446   96850 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:37.261617   96850 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:17:37.261770   96850 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:17:37.261899   96850 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:17:37.262054   96850 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:40.315595   96850 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:40.315720   96850 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:40.315744   96850 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:40.315760   96850 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:40.315786   96850 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:40.315797   96850 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:40.316247   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:40.316298   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:40.331414   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34239
	I0315 23:17:40.331899   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:40.332319   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:40.332342   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:40.332716   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:40.332895   96850 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:40.334416   96850 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:40.334435   96850 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:40.334745   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:40.334793   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:40.349388   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38013
	I0315 23:17:40.349860   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:40.350317   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:40.350341   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:40.350728   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:40.350939   96850 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:40.354260   96850 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:40.354793   96850 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:40.354822   96850 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:40.354959   96850 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:40.355272   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:40.355307   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:40.371171   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43191
	I0315 23:17:40.371596   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:40.372125   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:40.372151   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:40.372456   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:40.372675   96850 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:40.372871   96850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:40.372897   96850 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:40.375832   96850 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:40.376209   96850 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:40.376246   96850 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:40.376417   96850 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:40.376582   96850 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:40.376727   96850 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:40.376872   96850 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:40.464479   96850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:40.480095   96850 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:40.480125   96850 api_server.go:166] Checking apiserver status ...
	I0315 23:17:40.480161   96850 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:40.495024   96850 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:40.505148   96850 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:40.505200   96850 ssh_runner.go:195] Run: ls
	I0315 23:17:40.518726   96850 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:40.526184   96850 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:40.526210   96850 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:40.526222   96850 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:40.526246   96850 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:40.526544   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:40.526588   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:40.541623   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0315 23:17:40.542009   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:40.542466   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:40.542487   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:40.542808   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:40.542989   96850 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:40.544777   96850 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:40.544796   96850 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:40.545203   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:40.545245   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:40.560524   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39379
	I0315 23:17:40.560935   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:40.561391   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:40.561408   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:40.561725   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:40.561940   96850 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:40.564935   96850 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:40.565384   96850 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:40.565419   96850 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:40.565544   96850 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:40.565838   96850 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:40.565883   96850 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:40.580389   96850 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43019
	I0315 23:17:40.580779   96850 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:40.581171   96850 main.go:141] libmachine: Using API Version  1
	I0315 23:17:40.581190   96850 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:40.581499   96850 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:40.581702   96850 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:40.581887   96850 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:40.581907   96850 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:40.584415   96850 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:40.584869   96850 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:40.584898   96850 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:40.585028   96850 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:40.585174   96850 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:40.585324   96850 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:40.585449   96850 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:40.671846   96850 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:40.690790   96850 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
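For the reachable control-plane nodes, the same status pass also probes the apiserver directly: the log shows a GET against https://192.168.39.254:8443/healthz returning "200 ok" for ha-285481 and ha-285481-m03. Below is a minimal, hypothetical Go sketch of that health probe (not minikube's implementation); it skips TLS verification only because the test VM's apiserver certificate is not in the local trust store.

// Hypothetical healthz probe against the cluster endpoint seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.254:8443/healthz")
	if err != nil {
		fmt.Println("healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // a healthy apiserver answers "200 ok"
}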
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (3.750125116s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:17:46.625591   97408 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:46.625719   97408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:46.625729   97408 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:46.625741   97408 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:46.625953   97408 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:46.626173   97408 out.go:298] Setting JSON to false
	I0315 23:17:46.626214   97408 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:46.626260   97408 notify.go:220] Checking for updates...
	I0315 23:17:46.626582   97408 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:46.626596   97408 status.go:255] checking status of ha-285481 ...
	I0315 23:17:46.626971   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:46.627037   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:46.645336   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46837
	I0315 23:17:46.645734   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:46.646283   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:46.646305   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:46.646607   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:46.646819   97408 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:46.648586   97408 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:46.648602   97408 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:46.648984   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:46.649030   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:46.664062   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36281
	I0315 23:17:46.664512   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:46.664950   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:46.664971   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:46.665299   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:46.665474   97408 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:46.667833   97408 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:46.668229   97408 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:46.668256   97408 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:46.668382   97408 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:46.668651   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:46.668687   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:46.683016   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46011
	I0315 23:17:46.683419   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:46.683872   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:46.683892   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:46.684210   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:46.684435   97408 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:46.684661   97408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:46.684691   97408 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:46.687712   97408 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:46.688159   97408 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:46.688196   97408 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:46.688326   97408 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:46.688499   97408 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:46.688624   97408 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:46.688756   97408 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:46.772567   97408 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:46.778928   97408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:46.795378   97408 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:46.795411   97408 api_server.go:166] Checking apiserver status ...
	I0315 23:17:46.795456   97408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:46.812972   97408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:46.824100   97408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:46.824171   97408 ssh_runner.go:195] Run: ls
	I0315 23:17:46.829244   97408 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:46.836606   97408 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:46.836632   97408 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:46.836642   97408 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:46.836666   97408 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:46.836979   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:46.837022   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:46.852557   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40879
	I0315 23:17:46.853119   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:46.853646   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:46.853678   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:46.854046   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:46.854281   97408 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:46.855843   97408 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:17:46.855859   97408 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:46.856165   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:46.856216   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:46.871555   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33143
	I0315 23:17:46.872029   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:46.872488   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:46.872515   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:46.872820   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:46.873045   97408 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:17:46.876135   97408 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:46.876598   97408 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:46.876633   97408 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:46.876734   97408 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:17:46.877024   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:46.877061   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:46.892263   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34311
	I0315 23:17:46.892745   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:46.893229   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:46.893257   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:46.893640   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:46.893871   97408 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:17:46.894083   97408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:46.894107   97408 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:17:46.896896   97408 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:46.897320   97408 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:17:46.897341   97408 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:17:46.897470   97408 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:17:46.897636   97408 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:17:46.897781   97408 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:17:46.897895   97408 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	W0315 23:17:49.947596   97408 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.201:22: connect: no route to host
	W0315 23:17:49.947685   97408 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	E0315 23:17:49.947698   97408 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:49.947709   97408 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:17:49.947733   97408 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.201:22: connect: no route to host
	I0315 23:17:49.947740   97408 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:49.948085   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:49.948128   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:49.963439   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33703
	I0315 23:17:49.963863   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:49.964413   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:49.964437   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:49.964782   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:49.964988   97408 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:17:49.966718   97408 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:17:49.966735   97408 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:49.967051   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:49.967091   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:49.982268   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35553
	I0315 23:17:49.982763   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:49.983262   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:49.983300   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:49.983684   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:49.983882   97408 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:17:49.987478   97408 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:49.987894   97408 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:49.987920   97408 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:49.988089   97408 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:17:49.988416   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:49.988464   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:50.003345   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41683
	I0315 23:17:50.003812   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:50.004369   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:50.004391   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:50.004794   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:50.005017   97408 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:17:50.005200   97408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:50.005227   97408 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:17:50.008063   97408 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:50.008535   97408 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:17:50.008558   97408 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:17:50.008734   97408 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:17:50.008940   97408 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:17:50.009110   97408 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:17:50.009262   97408 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:17:50.092035   97408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:50.110688   97408 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:50.110727   97408 api_server.go:166] Checking apiserver status ...
	I0315 23:17:50.110772   97408 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:50.128964   97408 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:17:50.143517   97408 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:50.143593   97408 ssh_runner.go:195] Run: ls
	I0315 23:17:50.148794   97408 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:50.153900   97408 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:50.153925   97408 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:17:50.153935   97408 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:50.153951   97408 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:17:50.154285   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:50.154328   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:50.169501   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34237
	I0315 23:17:50.169913   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:50.170514   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:50.170542   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:50.170889   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:50.171126   97408 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:17:50.172947   97408 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:17:50.172965   97408 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:50.173250   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:50.173291   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:50.188285   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44691
	I0315 23:17:50.188792   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:50.189434   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:50.189470   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:50.189804   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:50.190046   97408 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:17:50.193039   97408 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:50.193559   97408 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:50.193598   97408 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:50.193763   97408 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:17:50.194188   97408 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:50.194234   97408 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:50.209630   97408 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42445
	I0315 23:17:50.210090   97408 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:50.210593   97408 main.go:141] libmachine: Using API Version  1
	I0315 23:17:50.210620   97408 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:50.210906   97408 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:50.211113   97408 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:17:50.211306   97408 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:50.211337   97408 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:17:50.213855   97408 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:50.214285   97408 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:17:50.214310   97408 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:17:50.214448   97408 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:17:50.214632   97408 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:17:50.214773   97408 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:17:50.214915   97408 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:17:50.299165   97408 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:50.314987   97408 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 7 (657.753155ms)

-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-285481-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I0315 23:17:59.765448   97539 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:17:59.765754   97539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:59.765767   97539 out.go:304] Setting ErrFile to fd 2...
	I0315 23:17:59.765772   97539 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:17:59.766064   97539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:17:59.766281   97539 out.go:298] Setting JSON to false
	I0315 23:17:59.766327   97539 mustload.go:65] Loading cluster: ha-285481
	I0315 23:17:59.766461   97539 notify.go:220] Checking for updates...
	I0315 23:17:59.767675   97539 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:17:59.767714   97539 status.go:255] checking status of ha-285481 ...
	I0315 23:17:59.768890   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:59.768932   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:59.790484   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33145
	I0315 23:17:59.791051   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:59.791688   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:17:59.791714   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:59.792147   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:59.792373   97539 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:17:59.793869   97539 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:17:59.793886   97539 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:59.794149   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:59.794183   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:59.810346   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41083
	I0315 23:17:59.810750   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:59.811281   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:17:59.811300   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:59.811743   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:59.811935   97539 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:17:59.814483   97539 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:59.814963   97539 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:59.814998   97539 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:59.815183   97539 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:17:59.815497   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:59.815532   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:59.829542   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44649
	I0315 23:17:59.829983   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:59.830512   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:17:59.830536   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:59.830911   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:59.831145   97539 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:17:59.831371   97539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:17:59.831398   97539 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:17:59.834076   97539 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:59.834545   97539 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:17:59.834580   97539 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:17:59.834677   97539 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:17:59.834832   97539 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:17:59.834992   97539 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:17:59.835148   97539 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:17:59.915516   97539 ssh_runner.go:195] Run: systemctl --version
	I0315 23:17:59.922023   97539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:17:59.939394   97539 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:17:59.939424   97539 api_server.go:166] Checking apiserver status ...
	I0315 23:17:59.939458   97539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:17:59.954235   97539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup
	W0315 23:17:59.964487   97539 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1254/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:17:59.964545   97539 ssh_runner.go:195] Run: ls
	I0315 23:17:59.968974   97539 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:17:59.978723   97539 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:17:59.978750   97539 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:17:59.978763   97539 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:59.978789   97539 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:17:59.979267   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:59.979334   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:17:59.994812   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38209
	I0315 23:17:59.995302   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:17:59.995759   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:17:59.995782   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:17:59.996165   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:17:59.996403   97539 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:17:59.997930   97539 status.go:330] ha-285481-m02 host status = "Stopped" (err=<nil>)
	I0315 23:17:59.997949   97539 status.go:343] host is not running, skipping remaining checks
	I0315 23:17:59.997957   97539 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:17:59.997978   97539 status.go:255] checking status of ha-285481-m03 ...
	I0315 23:17:59.998278   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:17:59.998313   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:00.013971   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34545
	I0315 23:18:00.014372   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:00.014918   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:18:00.014945   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:00.015255   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:00.015460   97539 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:18:00.017328   97539 status.go:330] ha-285481-m03 host status = "Running" (err=<nil>)
	I0315 23:18:00.017345   97539 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:18:00.017686   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:00.017738   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:00.031606   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40017
	I0315 23:18:00.031971   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:00.032451   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:18:00.032472   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:00.032744   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:00.032941   97539 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:18:00.035888   97539 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:18:00.036337   97539 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:18:00.036358   97539 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:18:00.036562   97539 host.go:66] Checking if "ha-285481-m03" exists ...
	I0315 23:18:00.036869   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:00.036903   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:00.051883   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42535
	I0315 23:18:00.052299   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:00.052700   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:18:00.052731   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:00.053014   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:00.053199   97539 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:18:00.053381   97539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:18:00.053403   97539 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:18:00.056224   97539 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:18:00.056728   97539 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:18:00.056764   97539 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:18:00.056876   97539 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:18:00.057045   97539 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:18:00.057178   97539 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:18:00.057302   97539 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:18:00.141877   97539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:18:00.158904   97539 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:18:00.158933   97539 api_server.go:166] Checking apiserver status ...
	I0315 23:18:00.158974   97539 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:18:00.179260   97539 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup
	W0315 23:18:00.189232   97539 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1452/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:18:00.189293   97539 ssh_runner.go:195] Run: ls
	I0315 23:18:00.193983   97539 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:18:00.201266   97539 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:18:00.201291   97539 status.go:422] ha-285481-m03 apiserver status = Running (err=<nil>)
	I0315 23:18:00.201299   97539 status.go:257] ha-285481-m03 status: &{Name:ha-285481-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:18:00.201314   97539 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:18:00.201632   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:00.201668   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:00.217127   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40015
	I0315 23:18:00.217606   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:00.218121   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:18:00.218143   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:00.218434   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:00.218630   97539 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:18:00.220325   97539 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:18:00.220349   97539 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:18:00.220647   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:00.220693   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:00.236095   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33957
	I0315 23:18:00.236475   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:00.236898   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:18:00.236920   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:00.237241   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:00.237443   97539 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:18:00.240061   97539 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:18:00.240476   97539 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:18:00.240507   97539 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:18:00.240707   97539 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:18:00.241013   97539 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:00.241056   97539 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:00.255356   97539 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36189
	I0315 23:18:00.255763   97539 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:00.256234   97539 main.go:141] libmachine: Using API Version  1
	I0315 23:18:00.256258   97539 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:00.256593   97539 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:00.256796   97539 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:18:00.256982   97539 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:18:00.257002   97539 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:18:00.260125   97539 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:18:00.260597   97539 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:18:00.260624   97539 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:18:00.260782   97539 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:18:00.260972   97539 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:18:00.261165   97539 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:18:00.261302   97539 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:18:00.347635   97539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:18:00.364220   97539 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-285481 -n ha-285481
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-285481 logs -n 25: (1.61684225s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481:/home/docker/cp-test_ha-285481-m03_ha-285481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481 sudo cat                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m04 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp testdata/cp-test.txt                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481:/home/docker/cp-test_ha-285481-m04_ha-285481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481 sudo cat                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03:/home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m03 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-285481 node stop m02 -v=7                                                     | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-285481 node start m02 -v=7                                                    | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 23:09:55
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 23:09:55.829425   92071 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:09:55.829892   92071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:55.829911   92071 out.go:304] Setting ErrFile to fd 2...
	I0315 23:09:55.829918   92071 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:55.830376   92071 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:09:55.831360   92071 out.go:298] Setting JSON to false
	I0315 23:09:55.832277   92071 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6746,"bootTime":1710537450,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:09:55.832345   92071 start.go:139] virtualization: kvm guest
	I0315 23:09:55.834345   92071 out.go:177] * [ha-285481] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:09:55.835694   92071 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:09:55.835735   92071 notify.go:220] Checking for updates...
	I0315 23:09:55.836938   92071 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:09:55.838167   92071 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:09:55.839539   92071 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:55.840906   92071 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:09:55.842290   92071 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:09:55.843777   92071 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:09:55.877928   92071 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 23:09:55.879144   92071 start.go:297] selected driver: kvm2
	I0315 23:09:55.879164   92071 start.go:901] validating driver "kvm2" against <nil>
	I0315 23:09:55.879176   92071 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:09:55.879928   92071 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:09:55.880022   92071 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:09:55.894520   92071 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:09:55.894572   92071 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 23:09:55.894762   92071 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:09:55.894823   92071 cni.go:84] Creating CNI manager for ""
	I0315 23:09:55.894836   92071 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0315 23:09:55.894840   92071 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0315 23:09:55.894890   92071 start.go:340] cluster config:
	{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:09:55.895006   92071 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:09:55.896676   92071 out.go:177] * Starting "ha-285481" primary control-plane node in "ha-285481" cluster
	I0315 23:09:55.897810   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:09:55.897836   92071 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 23:09:55.897843   92071 cache.go:56] Caching tarball of preloaded images
	I0315 23:09:55.897913   92071 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:09:55.897923   92071 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:09:55.898203   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:09:55.898221   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json: {Name:mkaa91889e299a827fa98bd8233aee91a275a9da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:09:55.898345   92071 start.go:360] acquireMachinesLock for ha-285481: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:09:55.898371   92071 start.go:364] duration metric: took 13.866µs to acquireMachinesLock for "ha-285481"
	I0315 23:09:55.898387   92071 start.go:93] Provisioning new machine with config: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:09:55.898436   92071 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 23:09:55.900023   92071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:09:55.900136   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:09:55.900169   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:09:55.913773   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33699
	I0315 23:09:55.914175   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:09:55.914696   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:09:55.914717   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:09:55.915065   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:09:55.915244   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:09:55.915397   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:09:55.915526   92071 start.go:159] libmachine.API.Create for "ha-285481" (driver="kvm2")
	I0315 23:09:55.915561   92071 client.go:168] LocalClient.Create starting
	I0315 23:09:55.915594   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:09:55.915627   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:09:55.915643   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:09:55.915694   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:09:55.915712   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:09:55.915735   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:09:55.915754   92071 main.go:141] libmachine: Running pre-create checks...
	I0315 23:09:55.915766   92071 main.go:141] libmachine: (ha-285481) Calling .PreCreateCheck
	I0315 23:09:55.916074   92071 main.go:141] libmachine: (ha-285481) Calling .GetConfigRaw
	I0315 23:09:55.916392   92071 main.go:141] libmachine: Creating machine...
	I0315 23:09:55.916404   92071 main.go:141] libmachine: (ha-285481) Calling .Create
	I0315 23:09:55.916528   92071 main.go:141] libmachine: (ha-285481) Creating KVM machine...
	I0315 23:09:55.917654   92071 main.go:141] libmachine: (ha-285481) DBG | found existing default KVM network
	I0315 23:09:55.918345   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:55.918228   92093 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f1e0}
	I0315 23:09:55.918403   92071 main.go:141] libmachine: (ha-285481) DBG | created network xml: 
	I0315 23:09:55.918424   92071 main.go:141] libmachine: (ha-285481) DBG | <network>
	I0315 23:09:55.918432   92071 main.go:141] libmachine: (ha-285481) DBG |   <name>mk-ha-285481</name>
	I0315 23:09:55.918442   92071 main.go:141] libmachine: (ha-285481) DBG |   <dns enable='no'/>
	I0315 23:09:55.918466   92071 main.go:141] libmachine: (ha-285481) DBG |   
	I0315 23:09:55.918481   92071 main.go:141] libmachine: (ha-285481) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 23:09:55.918490   92071 main.go:141] libmachine: (ha-285481) DBG |     <dhcp>
	I0315 23:09:55.918502   92071 main.go:141] libmachine: (ha-285481) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 23:09:55.918520   92071 main.go:141] libmachine: (ha-285481) DBG |     </dhcp>
	I0315 23:09:55.918529   92071 main.go:141] libmachine: (ha-285481) DBG |   </ip>
	I0315 23:09:55.918573   92071 main.go:141] libmachine: (ha-285481) DBG |   
	I0315 23:09:55.918598   92071 main.go:141] libmachine: (ha-285481) DBG | </network>
	I0315 23:09:55.918610   92071 main.go:141] libmachine: (ha-285481) DBG | 
	I0315 23:09:55.923112   92071 main.go:141] libmachine: (ha-285481) DBG | trying to create private KVM network mk-ha-285481 192.168.39.0/24...
	I0315 23:09:55.994643   92071 main.go:141] libmachine: (ha-285481) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481 ...
	I0315 23:09:55.994672   92071 main.go:141] libmachine: (ha-285481) DBG | private KVM network mk-ha-285481 192.168.39.0/24 created
	I0315 23:09:55.994691   92071 main.go:141] libmachine: (ha-285481) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:09:55.994719   92071 main.go:141] libmachine: (ha-285481) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:09:55.994736   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:55.994570   92093 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:56.241606   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:56.241464   92093 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa...
	I0315 23:09:56.279521   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:56.279414   92093 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/ha-285481.rawdisk...
	I0315 23:09:56.279549   92071 main.go:141] libmachine: (ha-285481) DBG | Writing magic tar header
	I0315 23:09:56.279559   92071 main.go:141] libmachine: (ha-285481) DBG | Writing SSH key tar header
	I0315 23:09:56.279566   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:56.279532   92093 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481 ...
	I0315 23:09:56.279703   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481
	I0315 23:09:56.279728   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:09:56.279741   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481 (perms=drwx------)
	I0315 23:09:56.279757   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:09:56.279768   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:09:56.279784   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:09:56.279797   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:09:56.279817   92071 main.go:141] libmachine: (ha-285481) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:09:56.279830   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:56.279845   92071 main.go:141] libmachine: (ha-285481) Creating domain...
	I0315 23:09:56.279859   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:09:56.279874   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:09:56.279885   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:09:56.279896   92071 main.go:141] libmachine: (ha-285481) DBG | Checking permissions on dir: /home
	I0315 23:09:56.279905   92071 main.go:141] libmachine: (ha-285481) DBG | Skipping /home - not owner
	I0315 23:09:56.280884   92071 main.go:141] libmachine: (ha-285481) define libvirt domain using xml: 
	I0315 23:09:56.280905   92071 main.go:141] libmachine: (ha-285481) <domain type='kvm'>
	I0315 23:09:56.280911   92071 main.go:141] libmachine: (ha-285481)   <name>ha-285481</name>
	I0315 23:09:56.280920   92071 main.go:141] libmachine: (ha-285481)   <memory unit='MiB'>2200</memory>
	I0315 23:09:56.280928   92071 main.go:141] libmachine: (ha-285481)   <vcpu>2</vcpu>
	I0315 23:09:56.280934   92071 main.go:141] libmachine: (ha-285481)   <features>
	I0315 23:09:56.280942   92071 main.go:141] libmachine: (ha-285481)     <acpi/>
	I0315 23:09:56.280948   92071 main.go:141] libmachine: (ha-285481)     <apic/>
	I0315 23:09:56.280964   92071 main.go:141] libmachine: (ha-285481)     <pae/>
	I0315 23:09:56.280969   92071 main.go:141] libmachine: (ha-285481)     
	I0315 23:09:56.280974   92071 main.go:141] libmachine: (ha-285481)   </features>
	I0315 23:09:56.280979   92071 main.go:141] libmachine: (ha-285481)   <cpu mode='host-passthrough'>
	I0315 23:09:56.280983   92071 main.go:141] libmachine: (ha-285481)   
	I0315 23:09:56.280992   92071 main.go:141] libmachine: (ha-285481)   </cpu>
	I0315 23:09:56.281017   92071 main.go:141] libmachine: (ha-285481)   <os>
	I0315 23:09:56.281041   92071 main.go:141] libmachine: (ha-285481)     <type>hvm</type>
	I0315 23:09:56.281051   92071 main.go:141] libmachine: (ha-285481)     <boot dev='cdrom'/>
	I0315 23:09:56.281071   92071 main.go:141] libmachine: (ha-285481)     <boot dev='hd'/>
	I0315 23:09:56.281085   92071 main.go:141] libmachine: (ha-285481)     <bootmenu enable='no'/>
	I0315 23:09:56.281095   92071 main.go:141] libmachine: (ha-285481)   </os>
	I0315 23:09:56.281106   92071 main.go:141] libmachine: (ha-285481)   <devices>
	I0315 23:09:56.281117   92071 main.go:141] libmachine: (ha-285481)     <disk type='file' device='cdrom'>
	I0315 23:09:56.281134   92071 main.go:141] libmachine: (ha-285481)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/boot2docker.iso'/>
	I0315 23:09:56.281149   92071 main.go:141] libmachine: (ha-285481)       <target dev='hdc' bus='scsi'/>
	I0315 23:09:56.281163   92071 main.go:141] libmachine: (ha-285481)       <readonly/>
	I0315 23:09:56.281173   92071 main.go:141] libmachine: (ha-285481)     </disk>
	I0315 23:09:56.281185   92071 main.go:141] libmachine: (ha-285481)     <disk type='file' device='disk'>
	I0315 23:09:56.281197   92071 main.go:141] libmachine: (ha-285481)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:09:56.281212   92071 main.go:141] libmachine: (ha-285481)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/ha-285481.rawdisk'/>
	I0315 23:09:56.281227   92071 main.go:141] libmachine: (ha-285481)       <target dev='hda' bus='virtio'/>
	I0315 23:09:56.281239   92071 main.go:141] libmachine: (ha-285481)     </disk>
	I0315 23:09:56.281247   92071 main.go:141] libmachine: (ha-285481)     <interface type='network'>
	I0315 23:09:56.281259   92071 main.go:141] libmachine: (ha-285481)       <source network='mk-ha-285481'/>
	I0315 23:09:56.281267   92071 main.go:141] libmachine: (ha-285481)       <model type='virtio'/>
	I0315 23:09:56.281278   92071 main.go:141] libmachine: (ha-285481)     </interface>
	I0315 23:09:56.281289   92071 main.go:141] libmachine: (ha-285481)     <interface type='network'>
	I0315 23:09:56.281308   92071 main.go:141] libmachine: (ha-285481)       <source network='default'/>
	I0315 23:09:56.281332   92071 main.go:141] libmachine: (ha-285481)       <model type='virtio'/>
	I0315 23:09:56.281344   92071 main.go:141] libmachine: (ha-285481)     </interface>
	I0315 23:09:56.281354   92071 main.go:141] libmachine: (ha-285481)     <serial type='pty'>
	I0315 23:09:56.281367   92071 main.go:141] libmachine: (ha-285481)       <target port='0'/>
	I0315 23:09:56.281381   92071 main.go:141] libmachine: (ha-285481)     </serial>
	I0315 23:09:56.281394   92071 main.go:141] libmachine: (ha-285481)     <console type='pty'>
	I0315 23:09:56.281405   92071 main.go:141] libmachine: (ha-285481)       <target type='serial' port='0'/>
	I0315 23:09:56.281419   92071 main.go:141] libmachine: (ha-285481)     </console>
	I0315 23:09:56.281428   92071 main.go:141] libmachine: (ha-285481)     <rng model='virtio'>
	I0315 23:09:56.281436   92071 main.go:141] libmachine: (ha-285481)       <backend model='random'>/dev/random</backend>
	I0315 23:09:56.281446   92071 main.go:141] libmachine: (ha-285481)     </rng>
	I0315 23:09:56.281460   92071 main.go:141] libmachine: (ha-285481)     
	I0315 23:09:56.281471   92071 main.go:141] libmachine: (ha-285481)     
	I0315 23:09:56.281481   92071 main.go:141] libmachine: (ha-285481)   </devices>
	I0315 23:09:56.281492   92071 main.go:141] libmachine: (ha-285481) </domain>
	I0315 23:09:56.281501   92071 main.go:141] libmachine: (ha-285481) 
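The network and domain definitions above are plain libvirt XML, so the objects created by a run like this can be inspected out of band. Below is a minimal, hypothetical Go sketch (not part of minikube) that shells out to virsh to dump both definitions; the names mk-ha-285481 and ha-285481 are taken from this log, and virsh is assumed to be on PATH.
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// dump runs one virsh subcommand and prints whatever it returns.
	func dump(args ...string) {
		out, err := exec.Command("virsh", args...).CombinedOutput()
		if err != nil {
			fmt.Printf("virsh %v failed: %v\n", args, err)
		}
		fmt.Println(string(out))
	}
	
	func main() {
		dump("net-dumpxml", "mk-ha-285481") // the private network created earlier in this log
		dump("dumpxml", "ha-285481")        // the domain defined from the XML above
	}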
	I0315 23:09:56.285700   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:db:c7:8c in network default
	I0315 23:09:56.286236   92071 main.go:141] libmachine: (ha-285481) Ensuring networks are active...
	I0315 23:09:56.286255   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:56.286909   92071 main.go:141] libmachine: (ha-285481) Ensuring network default is active
	I0315 23:09:56.287292   92071 main.go:141] libmachine: (ha-285481) Ensuring network mk-ha-285481 is active
	I0315 23:09:56.287861   92071 main.go:141] libmachine: (ha-285481) Getting domain xml...
	I0315 23:09:56.288631   92071 main.go:141] libmachine: (ha-285481) Creating domain...
	I0315 23:09:57.454593   92071 main.go:141] libmachine: (ha-285481) Waiting to get IP...
	I0315 23:09:57.455445   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:57.455860   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:57.455920   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:57.455859   92093 retry.go:31] will retry after 303.440345ms: waiting for machine to come up
	I0315 23:09:57.761405   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:57.761884   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:57.761915   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:57.761840   92093 retry.go:31] will retry after 353.723834ms: waiting for machine to come up
	I0315 23:09:58.117512   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:58.117940   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:58.117961   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:58.117876   92093 retry.go:31] will retry after 425.710423ms: waiting for machine to come up
	I0315 23:09:58.545353   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:58.545839   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:58.545867   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:58.545786   92093 retry.go:31] will retry after 592.484289ms: waiting for machine to come up
	I0315 23:09:59.139667   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:59.140172   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:59.140211   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:59.140142   92093 retry.go:31] will retry after 656.027969ms: waiting for machine to come up
	I0315 23:09:59.797914   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:09:59.798347   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:09:59.798376   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:09:59.798294   92093 retry.go:31] will retry after 647.178612ms: waiting for machine to come up
	I0315 23:10:00.447161   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:00.447598   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:00.447636   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:00.447542   92093 retry.go:31] will retry after 1.030593597s: waiting for machine to come up
	I0315 23:10:01.479515   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:01.479916   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:01.479972   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:01.479896   92093 retry.go:31] will retry after 1.239485655s: waiting for machine to come up
	I0315 23:10:02.720509   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:02.720970   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:02.721000   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:02.720900   92093 retry.go:31] will retry after 1.308366089s: waiting for machine to come up
	I0315 23:10:04.031407   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:04.031731   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:04.031757   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:04.031709   92093 retry.go:31] will retry after 2.03239829s: waiting for machine to come up
	I0315 23:10:06.065771   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:06.066130   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:06.066178   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:06.066092   92093 retry.go:31] will retry after 2.159259052s: waiting for machine to come up
	I0315 23:10:08.228491   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:08.228961   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:08.228989   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:08.228908   92093 retry.go:31] will retry after 2.816344286s: waiting for machine to come up
	I0315 23:10:11.047182   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:11.047578   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:11.047607   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:11.047526   92093 retry.go:31] will retry after 3.09430771s: waiting for machine to come up
	I0315 23:10:14.145796   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:14.146239   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find current IP address of domain ha-285481 in network mk-ha-285481
	I0315 23:10:14.146270   92071 main.go:141] libmachine: (ha-285481) DBG | I0315 23:10:14.146194   92093 retry.go:31] will retry after 5.256327871s: waiting for machine to come up
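The repeated "will retry after ..." lines are a backoff loop polling the DHCP leases for the new domain's address. The sketch below illustrates that pattern under assumed names (waitForIP, getIP); it is not minikube's actual retry.go.
	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitForIP polls getIP until it returns an address or the timeout expires,
	// sleeping a growing, jittered delay between attempts, much like the
	// "will retry after ..." lines above. Illustrative only.
	func waitForIP(getIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 300 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := getIP(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(delay + time.Duration(rand.Int63n(int64(delay)))) // base delay plus jitter
			delay = delay * 3 / 2                                        // grow the base delay
		}
		return "", errors.New("machine did not get an IP before the timeout")
	}
	
	func main() {
		// Stub lookup that never finds a lease, so this demo just exercises the loop.
		ip, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
		fmt.Println(ip, err)
	}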
	I0315 23:10:19.406569   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.407105   92071 main.go:141] libmachine: (ha-285481) Found IP for machine: 192.168.39.23
	I0315 23:10:19.407134   92071 main.go:141] libmachine: (ha-285481) Reserving static IP address...
	I0315 23:10:19.407147   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has current primary IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.407624   92071 main.go:141] libmachine: (ha-285481) DBG | unable to find host DHCP lease matching {name: "ha-285481", mac: "52:54:00:b7:7a:0e", ip: "192.168.39.23"} in network mk-ha-285481
	I0315 23:10:19.481094   92071 main.go:141] libmachine: (ha-285481) DBG | Getting to WaitForSSH function...
	I0315 23:10:19.481132   92071 main.go:141] libmachine: (ha-285481) Reserved static IP address: 192.168.39.23
	I0315 23:10:19.481145   92071 main.go:141] libmachine: (ha-285481) Waiting for SSH to be available...
	I0315 23:10:19.483843   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.484309   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.484336   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.484495   92071 main.go:141] libmachine: (ha-285481) DBG | Using SSH client type: external
	I0315 23:10:19.484528   92071 main.go:141] libmachine: (ha-285481) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa (-rw-------)
	I0315 23:10:19.484582   92071 main.go:141] libmachine: (ha-285481) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.23 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 23:10:19.484598   92071 main.go:141] libmachine: (ha-285481) DBG | About to run SSH command:
	I0315 23:10:19.484637   92071 main.go:141] libmachine: (ha-285481) DBG | exit 0
	I0315 23:10:19.607423   92071 main.go:141] libmachine: (ha-285481) DBG | SSH cmd err, output: <nil>: 
	I0315 23:10:19.607745   92071 main.go:141] libmachine: (ha-285481) KVM machine creation complete!
	I0315 23:10:19.608197   92071 main.go:141] libmachine: (ha-285481) Calling .GetConfigRaw
	I0315 23:10:19.608743   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:19.608914   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:19.609046   92071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 23:10:19.609056   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:19.610303   92071 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 23:10:19.610320   92071 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 23:10:19.610325   92071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 23:10:19.610341   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.612773   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.613134   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.613162   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.613296   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.613505   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.613672   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.613810   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.613987   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.614200   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.614214   92071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 23:10:19.710588   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:10:19.710617   92071 main.go:141] libmachine: Detecting the provisioner...
	I0315 23:10:19.710626   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.713746   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.714059   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.714092   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.714252   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.714433   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.714623   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.714772   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.714951   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.715170   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.715186   92071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 23:10:19.816386   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 23:10:19.816464   92071 main.go:141] libmachine: found compatible host: buildroot
	I0315 23:10:19.816482   92071 main.go:141] libmachine: Provisioning with buildroot...
	I0315 23:10:19.816491   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:10:19.816804   92071 buildroot.go:166] provisioning hostname "ha-285481"
	I0315 23:10:19.816834   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:10:19.817029   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.819775   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.820151   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.820179   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.820410   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.820602   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.820759   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.820916   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.821115   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.821292   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.821304   92071 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481 && echo "ha-285481" | sudo tee /etc/hostname
	I0315 23:10:19.932724   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481
	
	I0315 23:10:19.932748   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:19.935563   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.935915   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:19.935952   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:19.936178   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:19.936435   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.936621   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:19.936801   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:19.937033   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:19.937206   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:19.937221   92071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:10:20.045028   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:10:20.045056   92071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:10:20.045098   92071 buildroot.go:174] setting up certificates
	I0315 23:10:20.045111   92071 provision.go:84] configureAuth start
	I0315 23:10:20.045121   92071 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:10:20.045441   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:20.048493   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.048847   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.048873   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.049038   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.051186   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.051594   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.051617   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.051770   92071 provision.go:143] copyHostCerts
	I0315 23:10:20.051814   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:10:20.051849   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:10:20.051858   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:10:20.051923   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:10:20.052019   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:10:20.052038   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:10:20.052045   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:10:20.052077   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:10:20.052124   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:10:20.052141   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:10:20.052147   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:10:20.052166   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:10:20.052209   92071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481 san=[127.0.0.1 192.168.39.23 ha-285481 localhost minikube]
	I0315 23:10:20.169384   92071 provision.go:177] copyRemoteCerts
	I0315 23:10:20.169453   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:10:20.169478   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.172180   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.172464   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.172503   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.172653   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.172835   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.172977   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.173128   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.254138   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:10:20.254208   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:10:20.279254   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:10:20.279331   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0315 23:10:20.310178   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:10:20.310236   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:10:20.335032   92071 provision.go:87] duration metric: took 289.890096ms to configureAuth
	I0315 23:10:20.335071   92071 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:10:20.335299   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:10:20.335415   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.338003   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.338364   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.338388   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.338612   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.338796   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.338935   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.339043   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.339242   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:20.339444   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:20.339460   92071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:10:20.596446   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:10:20.596471   92071 main.go:141] libmachine: Checking connection to Docker...
	I0315 23:10:20.596479   92071 main.go:141] libmachine: (ha-285481) Calling .GetURL
	I0315 23:10:20.597925   92071 main.go:141] libmachine: (ha-285481) DBG | Using libvirt version 6000000
	I0315 23:10:20.600348   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.600732   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.600758   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.601068   92071 main.go:141] libmachine: Docker is up and running!
	I0315 23:10:20.601085   92071 main.go:141] libmachine: Reticulating splines...
	I0315 23:10:20.601093   92071 client.go:171] duration metric: took 24.685520422s to LocalClient.Create
	I0315 23:10:20.601115   92071 start.go:167] duration metric: took 24.685590841s to libmachine.API.Create "ha-285481"
	I0315 23:10:20.601129   92071 start.go:293] postStartSetup for "ha-285481" (driver="kvm2")
	I0315 23:10:20.601142   92071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:10:20.601165   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.601427   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:10:20.601451   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.603810   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.604189   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.604217   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.604380   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.604571   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.604815   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.604994   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.686497   92071 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:10:20.691241   92071 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:10:20.691266   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:10:20.691341   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:10:20.691434   92071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:10:20.691450   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:10:20.691584   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:10:20.701133   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:10:20.726838   92071 start.go:296] duration metric: took 125.695353ms for postStartSetup
	I0315 23:10:20.726911   92071 main.go:141] libmachine: (ha-285481) Calling .GetConfigRaw
	I0315 23:10:20.727477   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:20.730235   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.730709   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.730741   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.731002   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:10:20.731283   92071 start.go:128] duration metric: took 24.832834817s to createHost
	I0315 23:10:20.731346   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.733616   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.733937   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.733965   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.734066   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.734236   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.734383   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.734498   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.734684   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:10:20.734902   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:10:20.734920   92071 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0315 23:10:20.836271   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544220.811161677
	
	I0315 23:10:20.836294   92071 fix.go:216] guest clock: 1710544220.811161677
	I0315 23:10:20.836301   92071 fix.go:229] Guest: 2024-03-15 23:10:20.811161677 +0000 UTC Remote: 2024-03-15 23:10:20.731302898 +0000 UTC m=+24.949631004 (delta=79.858779ms)
	I0315 23:10:20.836320   92071 fix.go:200] guest clock delta is within tolerance: 79.858779ms
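The fix.go lines above read the guest clock over SSH (date +%s.%N) and compare it with the host clock, accepting the machine when the delta is small. A self-contained sketch of that comparison, using the guest and host timestamps from this log and an assumed one-second tolerance (the real tolerance is whatever fix.go uses):
	package main
	
	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)
	
	func main() {
		const tolerance = time.Second // assumed tolerance, for illustration only
	
		guestOut := "1710544220.811161677" // guest `date +%s.%N` output from the log above
		parts := strings.SplitN(guestOut, ".", 2)
		secs, _ := strconv.ParseInt(parts[0], 10, 64)
		nanos, _ := strconv.ParseInt(parts[1], 10, 64)
		guest := time.Unix(secs, nanos)
	
		// Host-side timestamp recorded in the same log line.
		host := time.Date(2024, time.March, 15, 23, 10, 20, 731302898, time.UTC)
	
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		if delta <= tolerance {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta) // prints 79.858779ms
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}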
	I0315 23:10:20.836325   92071 start.go:83] releasing machines lock for "ha-285481", held for 24.937945305s
	I0315 23:10:20.836341   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.836653   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:20.839376   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.839760   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.839784   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.839935   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.840574   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.840847   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:20.840976   92071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:10:20.841032   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.841088   92071 ssh_runner.go:195] Run: cat /version.json
	I0315 23:10:20.841118   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:20.843599   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.843990   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.844036   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.844404   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.844621   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.845134   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.845300   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.845459   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.845548   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:20.845580   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:20.845787   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:20.845975   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:20.846141   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:20.846294   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:20.920798   92071 ssh_runner.go:195] Run: systemctl --version
	I0315 23:10:20.939655   92071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:10:21.103285   92071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:10:21.109211   92071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:10:21.109277   92071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:10:21.126535   92071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 23:10:21.126563   92071 start.go:494] detecting cgroup driver to use...
	I0315 23:10:21.126620   92071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:10:21.143540   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:10:21.158524   92071 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:10:21.158600   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:10:21.173250   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:10:21.187931   92071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:10:21.304391   92071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:10:21.453889   92071 docker.go:233] disabling docker service ...
	I0315 23:10:21.453952   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:10:21.470128   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:10:21.484143   92071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:10:21.612833   92071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:10:21.730896   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:10:21.745268   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:10:21.763756   92071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:10:21.763801   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.774440   92071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:10:21.774515   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.785161   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.795957   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:10:21.807171   92071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:10:21.818498   92071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:10:21.828168   92071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 23:10:21.828228   92071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 23:10:21.841560   92071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:10:21.851376   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:10:21.966238   92071 ssh_runner.go:195] Run: sudo systemctl restart crio
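Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf overriding three settings before crio is restarted. The fragment below is an assumed reconstruction of those overrides only; the section headers are the ones CRI-O normally uses for these keys, and the rest of the drop-in is left as shipped in the ISO.
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"
	
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"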
	I0315 23:10:22.106233   92071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:10:22.106331   92071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:10:22.111335   92071 start.go:562] Will wait 60s for crictl version
	I0315 23:10:22.111405   92071 ssh_runner.go:195] Run: which crictl
	I0315 23:10:22.115431   92071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:10:22.151571   92071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:10:22.151661   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:10:22.181391   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:10:22.212534   92071 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:10:22.213905   92071 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:10:22.216786   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:22.217200   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:22.217226   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:22.217396   92071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:10:22.221797   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:10:22.235310   92071 kubeadm.go:877] updating cluster {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 23:10:22.235457   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:10:22.235530   92071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:10:22.271196   92071 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0315 23:10:22.271272   92071 ssh_runner.go:195] Run: which lz4
	I0315 23:10:22.275662   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0315 23:10:22.275745   92071 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0315 23:10:22.280159   92071 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0315 23:10:22.280183   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0315 23:10:23.946320   92071 crio.go:444] duration metric: took 1.670597717s to copy over tarball
	I0315 23:10:23.946382   92071 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0315 23:10:26.362268   92071 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.415857165s)
	I0315 23:10:26.362308   92071 crio.go:451] duration metric: took 2.415961561s to extract the tarball
	I0315 23:10:26.362325   92071 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0315 23:10:26.403571   92071 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:10:26.462157   92071 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:10:26.462179   92071 cache_images.go:84] Images are preloaded, skipping loading
	I0315 23:10:26.462188   92071 kubeadm.go:928] updating node { 192.168.39.23 8443 v1.28.4 crio true true} ...
	I0315 23:10:26.462317   92071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:10:26.462382   92071 ssh_runner.go:195] Run: crio config
	I0315 23:10:26.520625   92071 cni.go:84] Creating CNI manager for ""
	I0315 23:10:26.520660   92071 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 23:10:26.520679   92071 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 23:10:26.520708   92071 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-285481 NodeName:ha-285481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 23:10:26.520882   92071 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-285481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
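The kubeadm config rendered above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A hedged sketch of how such a stream can be split and sanity-checked generically, assuming a local copy named kubeadm.yaml and the gopkg.in/yaml.v3 package:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

func main() {
	// Read the rendered config (e.g. a copy of /var/tmp/minikube/kubeadm.yaml).
	raw, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		panic(err)
	}

	// kubeadm configs are several YAML documents separated by "---".
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var m map[string]interface{}
		if err := yaml.Unmarshal([]byte(doc), &m); err != nil {
			panic(err)
		}
		// Print the kind plus the fields that most often bite: the cgroup
		// driver and the CRI socket must match what CRI-O is configured with.
		fmt.Println("kind:", m["kind"])
		if v, ok := m["cgroupDriver"]; ok {
			fmt.Println("  cgroupDriver:", v)
		}
		if v, ok := m["containerRuntimeEndpoint"]; ok {
			fmt.Println("  containerRuntimeEndpoint:", v)
		}
	}
}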
	
	I0315 23:10:26.520916   92071 kube-vip.go:111] generating kube-vip config ...
	I0315 23:10:26.520969   92071 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:10:26.543023   92071 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:10:26.543165   92071 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
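The kube-vip static pod above is driven entirely by environment variables: ARP announcement of the virtual IP 192.168.39.254 on eth0, leader election for the control-plane lock, and load-balancing of the apiserver on port 8443. As an illustration of "generating kube-vip config", the sketch below renders just the variable part of that env block from a few parameters; the template and parameter names are hypothetical, not minikube's own:

package main

import (
	"os"
	"text/template"
)

// vipParams holds the handful of values that actually vary between clusters;
// everything else in the static pod manifest is boilerplate.
type vipParams struct {
	VIP       string // shared control-plane address (e.g. 192.168.39.254)
	Interface string // NIC the VIP is announced on (e.g. eth0)
	Port      int    // apiserver port fronted by kube-vip
}

const envTmpl = `    env:
    - name: vip_arp
      value: "true"
    - name: address
      value: {{.VIP}}
    - name: vip_interface
      value: {{.Interface}}
    - name: port
      value: "{{.Port}}"
    - name: cp_enable
      value: "true"
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{.Port}}"
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	// Render the variable part of the manifest for this cluster.
	_ = t.Execute(os.Stdout, vipParams{VIP: "192.168.39.254", Interface: "eth0", Port: 8443})
}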
	I0315 23:10:26.543227   92071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:10:26.560611   92071 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 23:10:26.560701   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 23:10:26.576707   92071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 23:10:26.595529   92071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:10:26.614498   92071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 23:10:26.633442   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:10:26.652735   92071 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:10:26.657112   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
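The bash one-liner above rewrites /etc/hosts so control-plane.minikube.internal resolves to the HA virtual IP. The same idea expressed in Go, as a rough sketch (it needs root; a careful tool would write a temp file and copy it into place, exactly as the shell version does):

package main

import (
	"os"
	"strings"
)

func main() {
	// Drop any stale control-plane.minikube.internal line, then append the
	// current VIP mapping.
	const host = "control-plane.minikube.internal"
	const vip = "192.168.39.254"

	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, vip+"\t"+host)

	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}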
	I0315 23:10:26.670904   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:10:26.805490   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:10:26.824183   92071 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.23
	I0315 23:10:26.824213   92071 certs.go:194] generating shared ca certs ...
	I0315 23:10:26.824245   92071 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:26.824451   92071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:10:26.824519   92071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:10:26.824536   92071 certs.go:256] generating profile certs ...
	I0315 23:10:26.824608   92071 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:10:26.824639   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt with IP's: []
	I0315 23:10:26.980160   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt ...
	I0315 23:10:26.980192   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt: {Name:mk1c5048a214d2dced4203732d39a9764f6dbaea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:26.980376   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key ...
	I0315 23:10:26.980393   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key: {Name:mka52854b81f06993ecaf7335cb216481234bb75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:26.980505   92071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1
	I0315 23:10:26.980528   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.254]
	I0315 23:10:27.243461   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1 ...
	I0315 23:10:27.243497   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1: {Name:mkbb04bbe69628bfaf0244064cb50aa428de2a9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.243668   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1 ...
	I0315 23:10:27.243683   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1: {Name:mk4aef05ec6f5326a9ced309014d7fc8e63afdaa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.243779   92071 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.d013a8e1 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:10:27.243865   92071 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.d013a8e1 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
	I0315 23:10:27.243932   92071 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:10:27.243959   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt with IP's: []
	I0315 23:10:27.446335   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt ...
	I0315 23:10:27.446368   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt: {Name:mkd5ef2f928a3f3f8755be3f9f58bef8a980c22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:27.446534   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key ...
	I0315 23:10:27.446545   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key: {Name:mkcb6e466047f30218160eb49e0092e4e744f66e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
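The profile certificates generated above are ordinary x509 certs signed by the shared minikube CA, with the service IP, localhost, the node IP and the HA VIP as subject alternative names. A self-contained sketch using only the standard library, with a throwaway in-process CA instead of minikube's ca.key, purely for illustration:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA key/cert (minikube would reuse its cached CA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Apiserver serving cert: IP SANs modelled on the list in the log above
	// (service IP, localhost, node IP, HA virtual IP).
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.23"), net.ParseIP("192.168.39.254"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
}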
	I0315 23:10:27.446618   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:10:27.446639   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:10:27.446653   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:10:27.446671   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:10:27.446685   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:10:27.446699   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:10:27.446711   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:10:27.446721   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:10:27.446770   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:10:27.446802   92071 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:10:27.446812   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:10:27.446831   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:10:27.446853   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:10:27.446878   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:10:27.446916   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:10:27.446947   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.446960   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.446973   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.447580   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:10:27.478347   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:10:27.503643   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:10:27.528078   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:10:27.552558   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 23:10:27.578130   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:10:27.605290   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:10:27.634366   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:10:27.657942   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:10:27.711571   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:10:27.736341   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:10:27.760213   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 23:10:27.777516   92071 ssh_runner.go:195] Run: openssl version
	I0315 23:10:27.783452   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:10:27.795425   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.800173   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.800241   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:10:27.806206   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:10:27.816912   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:10:27.828583   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.833719   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.833778   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:10:27.839840   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:10:27.850988   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:10:27.861881   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.866445   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.866516   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:10:27.872427   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:10:27.883577   92071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:10:27.887936   92071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 23:10:27.887996   92071 kubeadm.go:391] StartCluster: {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:10:27.888090   92071 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 23:10:27.888144   92071 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 23:10:27.927845   92071 cri.go:89] found id: ""
	I0315 23:10:27.927951   92071 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0315 23:10:27.937856   92071 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0315 23:10:27.947451   92071 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0315 23:10:27.957523   92071 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0315 23:10:27.957543   92071 kubeadm.go:156] found existing configuration files:
	
	I0315 23:10:27.957589   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0315 23:10:27.967178   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0315 23:10:27.967226   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0315 23:10:27.977156   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0315 23:10:27.986703   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0315 23:10:27.986757   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0315 23:10:27.996624   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0315 23:10:28.006072   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0315 23:10:28.006145   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0315 23:10:28.015996   92071 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0315 23:10:28.025583   92071 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0315 23:10:28.025644   92071 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0315 23:10:28.035560   92071 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0315 23:10:28.128154   92071 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0315 23:10:28.128234   92071 kubeadm.go:309] [preflight] Running pre-flight checks
	I0315 23:10:28.269481   92071 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0315 23:10:28.269650   92071 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0315 23:10:28.269773   92071 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0315 23:10:28.538562   92071 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0315 23:10:28.699919   92071 out.go:204]   - Generating certificates and keys ...
	I0315 23:10:28.700037   92071 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0315 23:10:28.700107   92071 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0315 23:10:28.979659   92071 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0315 23:10:29.158651   92071 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0315 23:10:29.272023   92071 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0315 23:10:29.380691   92071 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0315 23:10:29.715709   92071 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0315 23:10:29.716013   92071 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-285481 localhost] and IPs [192.168.39.23 127.0.0.1 ::1]
	I0315 23:10:29.814560   92071 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0315 23:10:29.814767   92071 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-285481 localhost] and IPs [192.168.39.23 127.0.0.1 ::1]
	I0315 23:10:29.931539   92071 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0315 23:10:30.053177   92071 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0315 23:10:30.165551   92071 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0315 23:10:30.165840   92071 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0315 23:10:30.411059   92071 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0315 23:10:30.579213   92071 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0315 23:10:30.675045   92071 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0315 23:10:31.193059   92071 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0315 23:10:31.193642   92071 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0315 23:10:31.196710   92071 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0315 23:10:31.198759   92071 out.go:204]   - Booting up control plane ...
	I0315 23:10:31.198896   92071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0315 23:10:31.199032   92071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0315 23:10:31.199133   92071 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0315 23:10:31.214859   92071 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0315 23:10:31.215647   92071 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0315 23:10:31.215734   92071 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0315 23:10:31.344670   92071 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0315 23:10:37.937756   92071 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.595764 seconds
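kubeadm's wait-control-plane phase above succeeds once the apiserver starts answering health probes. Something similar can be reproduced with a plain HTTPS poll of /healthz, which default clusters expose to anonymous clients; the sketch below skips TLS verification for brevity and is not how kubeadm itself performs the check:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Skipping certificate verification keeps the sketch short; a real tool
	// would trust the cluster CA instead.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(4 * time.Minute) // mirrors kubeadm's 4m0s budget
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.254:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("control plane healthy:", string(body))
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for the control plane")
}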
	I0315 23:10:37.937946   92071 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0315 23:10:37.963162   92071 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0315 23:10:38.495548   92071 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0315 23:10:38.495758   92071 kubeadm.go:309] [mark-control-plane] Marking the node ha-285481 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0315 23:10:39.014489   92071 kubeadm.go:309] [bootstrap-token] Using token: wgx4dt.9t39ji7sy70fmhdi
	I0315 23:10:39.016158   92071 out.go:204]   - Configuring RBAC rules ...
	I0315 23:10:39.016290   92071 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0315 23:10:39.027081   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0315 23:10:39.035488   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0315 23:10:39.039125   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0315 23:10:39.043055   92071 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0315 23:10:39.047230   92071 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0315 23:10:39.064125   92071 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0315 23:10:39.309590   92071 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0315 23:10:39.437940   92071 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0315 23:10:39.438770   92071 kubeadm.go:309] 
	I0315 23:10:39.438846   92071 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0315 23:10:39.438882   92071 kubeadm.go:309] 
	I0315 23:10:39.439007   92071 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0315 23:10:39.439019   92071 kubeadm.go:309] 
	I0315 23:10:39.439053   92071 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0315 23:10:39.439147   92071 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0315 23:10:39.439231   92071 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0315 23:10:39.439240   92071 kubeadm.go:309] 
	I0315 23:10:39.439303   92071 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0315 23:10:39.439313   92071 kubeadm.go:309] 
	I0315 23:10:39.439420   92071 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0315 23:10:39.439431   92071 kubeadm.go:309] 
	I0315 23:10:39.439509   92071 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0315 23:10:39.439618   92071 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0315 23:10:39.439716   92071 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0315 23:10:39.439725   92071 kubeadm.go:309] 
	I0315 23:10:39.439830   92071 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0315 23:10:39.439951   92071 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0315 23:10:39.439968   92071 kubeadm.go:309] 
	I0315 23:10:39.440049   92071 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wgx4dt.9t39ji7sy70fmhdi \
	I0315 23:10:39.440156   92071 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0315 23:10:39.440190   92071 kubeadm.go:309] 	--control-plane 
	I0315 23:10:39.440197   92071 kubeadm.go:309] 
	I0315 23:10:39.440270   92071 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0315 23:10:39.440278   92071 kubeadm.go:309] 
	I0315 23:10:39.440343   92071 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wgx4dt.9t39ji7sy70fmhdi \
	I0315 23:10:39.440467   92071 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0315 23:10:39.441186   92071 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0315 23:10:39.441304   92071 cni.go:84] Creating CNI manager for ""
	I0315 23:10:39.441324   92071 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0315 23:10:39.443030   92071 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0315 23:10:39.444360   92071 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0315 23:10:39.452442   92071 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0315 23:10:39.452470   92071 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0315 23:10:39.479017   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0315 23:10:40.508703   92071 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.029651541s)
	I0315 23:10:40.508761   92071 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0315 23:10:40.508913   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:40.508944   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-285481 minikube.k8s.io/updated_at=2024_03_15T23_10_40_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=ha-285481 minikube.k8s.io/primary=true
	I0315 23:10:40.527710   92071 ops.go:34] apiserver oom_adj: -16
	I0315 23:10:40.699851   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:41.200042   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:41.700032   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:42.200223   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:42.700035   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:43.200672   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:43.700602   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:44.200566   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:44.700883   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:45.200703   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:45.700875   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:46.200551   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:46.699949   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:47.200200   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:47.700707   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:48.200934   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:48.700781   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:49.200840   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:49.700165   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:50.200515   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:50.700009   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:51.200083   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0315 23:10:51.351359   92071 kubeadm.go:1107] duration metric: took 10.842519902s to wait for elevateKubeSystemPrivileges
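The burst of "kubectl get sa default" calls above is a wait loop: the default service account is created asynchronously by the controller-manager, and the minikube-rbac binding can only be applied once it exists. A minimal sketch of that retry pattern, assuming kubectl is on the PATH and using the kubeconfig path from the log:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Retry until the default service account shows up, then move on.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
			"-n", "default", "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			fmt.Println("default service account exists")
			return
		}
		time.Sleep(500 * time.Millisecond) // same cadence as the log above
	}
	fmt.Println("gave up waiting for the default service account")
}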
	W0315 23:10:51.351400   92071 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0315 23:10:51.351410   92071 kubeadm.go:393] duration metric: took 23.463419886s to StartCluster
	I0315 23:10:51.351433   92071 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:51.351514   92071 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:10:51.352223   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:10:51.352454   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0315 23:10:51.352480   92071 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0315 23:10:51.352540   92071 addons.go:69] Setting storage-provisioner=true in profile "ha-285481"
	I0315 23:10:51.352452   92071 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:10:51.352577   92071 addons.go:69] Setting default-storageclass=true in profile "ha-285481"
	I0315 23:10:51.352583   92071 start.go:240] waiting for startup goroutines ...
	I0315 23:10:51.352569   92071 addons.go:234] Setting addon storage-provisioner=true in "ha-285481"
	I0315 23:10:51.352608   92071 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-285481"
	I0315 23:10:51.352636   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:10:51.352694   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:10:51.353014   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.353059   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.353089   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.353132   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.368263   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45581
	I0315 23:10:51.368668   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35087
	I0315 23:10:51.368801   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.369169   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.369443   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.369466   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.369830   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.369888   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.369896   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.370105   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:51.370231   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.370819   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.370861   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.372650   92071 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:10:51.372921   92071 kapi.go:59] client config for ha-285481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt", KeyFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key", CAFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0315 23:10:51.373485   92071 cert_rotation.go:137] Starting client certificate rotation controller
	I0315 23:10:51.373693   92071 addons.go:234] Setting addon default-storageclass=true in "ha-285481"
	I0315 23:10:51.373731   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:10:51.373976   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.374028   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.386183   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0315 23:10:51.386714   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.387358   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.387396   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.387789   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.387990   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:51.388337   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43637
	I0315 23:10:51.388722   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.389237   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.389270   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.389647   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.390010   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:51.390188   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:51.390235   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:51.391880   92071 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0315 23:10:51.393694   92071 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 23:10:51.393716   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0315 23:10:51.393738   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:51.397022   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.397583   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:51.397613   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.397898   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:51.398126   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:51.398316   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:51.398446   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:51.405908   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37469
	I0315 23:10:51.406318   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:51.406880   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:51.406906   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:51.407217   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:51.407429   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:10:51.408981   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:10:51.409247   92071 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0315 23:10:51.409266   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0315 23:10:51.409286   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:10:51.411992   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.412491   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:10:51.412516   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:10:51.412652   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:10:51.412869   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:10:51.413045   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:10:51.413215   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:10:51.587070   92071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0315 23:10:51.593632   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0315 23:10:51.614888   92071 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0315 23:10:52.653677   92071 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.060002507s)
	I0315 23:10:52.653716   92071 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0315 23:10:52.653758   92071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.066641759s)
	I0315 23:10:52.653800   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.653801   92071 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.038881304s)
	I0315 23:10:52.653839   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.653854   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.653812   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.654155   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.654172   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654182   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.654186   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654196   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.654197   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654204   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.654209   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654218   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.654226   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.654427   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654444   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654462   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.654518   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.654528   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.654643   92071 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0315 23:10:52.654650   92071 round_trippers.go:469] Request Headers:
	I0315 23:10:52.654660   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:10:52.654665   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:10:52.670314   92071 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0315 23:10:52.671233   92071 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0315 23:10:52.671253   92071 round_trippers.go:469] Request Headers:
	I0315 23:10:52.671264   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:10:52.671270   92071 round_trippers.go:473]     Content-Type: application/json
	I0315 23:10:52.671279   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:10:52.674221   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:10:52.674379   92071 main.go:141] libmachine: Making call to close driver server
	I0315 23:10:52.674394   92071 main.go:141] libmachine: (ha-285481) Calling .Close
	I0315 23:10:52.674674   92071 main.go:141] libmachine: Successfully made call to close driver server
	I0315 23:10:52.674697   92071 main.go:141] libmachine: Making call to close connection to plugin binary
	I0315 23:10:52.674697   92071 main.go:141] libmachine: (ha-285481) DBG | Closing plugin on server side
	I0315 23:10:52.676558   92071 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0315 23:10:52.677804   92071 addons.go:505] duration metric: took 1.325325685s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0315 23:10:52.677848   92071 start.go:245] waiting for cluster config update ...
	I0315 23:10:52.677864   92071 start.go:254] writing updated cluster config ...
	I0315 23:10:52.679616   92071 out.go:177] 
	I0315 23:10:52.681249   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:10:52.681355   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:10:52.683252   92071 out.go:177] * Starting "ha-285481-m02" control-plane node in "ha-285481" cluster
	I0315 23:10:52.684485   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:10:52.684526   92071 cache.go:56] Caching tarball of preloaded images
	I0315 23:10:52.684625   92071 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:10:52.684637   92071 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:10:52.684724   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:10:52.684916   92071 start.go:360] acquireMachinesLock for ha-285481-m02: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:10:52.684969   92071 start.go:364] duration metric: took 32.696µs to acquireMachinesLock for "ha-285481-m02"
	I0315 23:10:52.684987   92071 start.go:93] Provisioning new machine with config: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:10:52.685053   92071 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0315 23:10:52.686516   92071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:10:52.686596   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:10:52.686637   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:10:52.701524   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46319
	I0315 23:10:52.701941   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:10:52.702411   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:10:52.702430   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:10:52.702759   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:10:52.702971   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:10:52.703137   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:10:52.703282   92071 start.go:159] libmachine.API.Create for "ha-285481" (driver="kvm2")
	I0315 23:10:52.703328   92071 client.go:168] LocalClient.Create starting
	I0315 23:10:52.703370   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:10:52.703410   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:10:52.703431   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:10:52.703505   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:10:52.703539   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:10:52.703564   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:10:52.703589   92071 main.go:141] libmachine: Running pre-create checks...
	I0315 23:10:52.703602   92071 main.go:141] libmachine: (ha-285481-m02) Calling .PreCreateCheck
	I0315 23:10:52.703772   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetConfigRaw
	I0315 23:10:52.704283   92071 main.go:141] libmachine: Creating machine...
	I0315 23:10:52.704299   92071 main.go:141] libmachine: (ha-285481-m02) Calling .Create
	I0315 23:10:52.704443   92071 main.go:141] libmachine: (ha-285481-m02) Creating KVM machine...
	I0315 23:10:52.705644   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found existing default KVM network
	I0315 23:10:52.705762   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found existing private KVM network mk-ha-285481
	I0315 23:10:52.705929   92071 main.go:141] libmachine: (ha-285481-m02) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02 ...
	I0315 23:10:52.705958   92071 main.go:141] libmachine: (ha-285481-m02) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:10:52.705996   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:52.705898   92430 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:10:52.706138   92071 main.go:141] libmachine: (ha-285481-m02) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:10:52.950484   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:52.950355   92430 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa...
	I0315 23:10:53.146048   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:53.145865   92430 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/ha-285481-m02.rawdisk...
	I0315 23:10:53.146096   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Writing magic tar header
	I0315 23:10:53.146158   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Writing SSH key tar header
	I0315 23:10:53.146191   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:53.146011   92430 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02 ...
	I0315 23:10:53.146214   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02 (perms=drwx------)
	I0315 23:10:53.146232   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:10:53.146242   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:10:53.146258   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:10:53.146270   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:10:53.146281   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02
	I0315 23:10:53.146295   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:10:53.146305   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:10:53.146342   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:10:53.146384   92071 main.go:141] libmachine: (ha-285481-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:10:53.146396   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:10:53.146408   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:10:53.146416   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Checking permissions on dir: /home
	I0315 23:10:53.146444   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Skipping /home - not owner
	I0315 23:10:53.146460   92071 main.go:141] libmachine: (ha-285481-m02) Creating domain...
	I0315 23:10:53.147595   92071 main.go:141] libmachine: (ha-285481-m02) define libvirt domain using xml: 
	I0315 23:10:53.147613   92071 main.go:141] libmachine: (ha-285481-m02) <domain type='kvm'>
	I0315 23:10:53.147620   92071 main.go:141] libmachine: (ha-285481-m02)   <name>ha-285481-m02</name>
	I0315 23:10:53.147624   92071 main.go:141] libmachine: (ha-285481-m02)   <memory unit='MiB'>2200</memory>
	I0315 23:10:53.147654   92071 main.go:141] libmachine: (ha-285481-m02)   <vcpu>2</vcpu>
	I0315 23:10:53.147685   92071 main.go:141] libmachine: (ha-285481-m02)   <features>
	I0315 23:10:53.147699   92071 main.go:141] libmachine: (ha-285481-m02)     <acpi/>
	I0315 23:10:53.147711   92071 main.go:141] libmachine: (ha-285481-m02)     <apic/>
	I0315 23:10:53.147720   92071 main.go:141] libmachine: (ha-285481-m02)     <pae/>
	I0315 23:10:53.147733   92071 main.go:141] libmachine: (ha-285481-m02)     
	I0315 23:10:53.147743   92071 main.go:141] libmachine: (ha-285481-m02)   </features>
	I0315 23:10:53.147756   92071 main.go:141] libmachine: (ha-285481-m02)   <cpu mode='host-passthrough'>
	I0315 23:10:53.147768   92071 main.go:141] libmachine: (ha-285481-m02)   
	I0315 23:10:53.147783   92071 main.go:141] libmachine: (ha-285481-m02)   </cpu>
	I0315 23:10:53.147796   92071 main.go:141] libmachine: (ha-285481-m02)   <os>
	I0315 23:10:53.147808   92071 main.go:141] libmachine: (ha-285481-m02)     <type>hvm</type>
	I0315 23:10:53.147821   92071 main.go:141] libmachine: (ha-285481-m02)     <boot dev='cdrom'/>
	I0315 23:10:53.147832   92071 main.go:141] libmachine: (ha-285481-m02)     <boot dev='hd'/>
	I0315 23:10:53.147842   92071 main.go:141] libmachine: (ha-285481-m02)     <bootmenu enable='no'/>
	I0315 23:10:53.147853   92071 main.go:141] libmachine: (ha-285481-m02)   </os>
	I0315 23:10:53.147882   92071 main.go:141] libmachine: (ha-285481-m02)   <devices>
	I0315 23:10:53.147910   92071 main.go:141] libmachine: (ha-285481-m02)     <disk type='file' device='cdrom'>
	I0315 23:10:53.147935   92071 main.go:141] libmachine: (ha-285481-m02)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/boot2docker.iso'/>
	I0315 23:10:53.147954   92071 main.go:141] libmachine: (ha-285481-m02)       <target dev='hdc' bus='scsi'/>
	I0315 23:10:53.147967   92071 main.go:141] libmachine: (ha-285481-m02)       <readonly/>
	I0315 23:10:53.147973   92071 main.go:141] libmachine: (ha-285481-m02)     </disk>
	I0315 23:10:53.147984   92071 main.go:141] libmachine: (ha-285481-m02)     <disk type='file' device='disk'>
	I0315 23:10:53.147993   92071 main.go:141] libmachine: (ha-285481-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:10:53.148008   92071 main.go:141] libmachine: (ha-285481-m02)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/ha-285481-m02.rawdisk'/>
	I0315 23:10:53.148018   92071 main.go:141] libmachine: (ha-285481-m02)       <target dev='hda' bus='virtio'/>
	I0315 23:10:53.148034   92071 main.go:141] libmachine: (ha-285481-m02)     </disk>
	I0315 23:10:53.148052   92071 main.go:141] libmachine: (ha-285481-m02)     <interface type='network'>
	I0315 23:10:53.148063   92071 main.go:141] libmachine: (ha-285481-m02)       <source network='mk-ha-285481'/>
	I0315 23:10:53.148075   92071 main.go:141] libmachine: (ha-285481-m02)       <model type='virtio'/>
	I0315 23:10:53.148087   92071 main.go:141] libmachine: (ha-285481-m02)     </interface>
	I0315 23:10:53.148099   92071 main.go:141] libmachine: (ha-285481-m02)     <interface type='network'>
	I0315 23:10:53.148113   92071 main.go:141] libmachine: (ha-285481-m02)       <source network='default'/>
	I0315 23:10:53.148129   92071 main.go:141] libmachine: (ha-285481-m02)       <model type='virtio'/>
	I0315 23:10:53.148142   92071 main.go:141] libmachine: (ha-285481-m02)     </interface>
	I0315 23:10:53.148151   92071 main.go:141] libmachine: (ha-285481-m02)     <serial type='pty'>
	I0315 23:10:53.148163   92071 main.go:141] libmachine: (ha-285481-m02)       <target port='0'/>
	I0315 23:10:53.148180   92071 main.go:141] libmachine: (ha-285481-m02)     </serial>
	I0315 23:10:53.148193   92071 main.go:141] libmachine: (ha-285481-m02)     <console type='pty'>
	I0315 23:10:53.148209   92071 main.go:141] libmachine: (ha-285481-m02)       <target type='serial' port='0'/>
	I0315 23:10:53.148222   92071 main.go:141] libmachine: (ha-285481-m02)     </console>
	I0315 23:10:53.148232   92071 main.go:141] libmachine: (ha-285481-m02)     <rng model='virtio'>
	I0315 23:10:53.148245   92071 main.go:141] libmachine: (ha-285481-m02)       <backend model='random'>/dev/random</backend>
	I0315 23:10:53.148256   92071 main.go:141] libmachine: (ha-285481-m02)     </rng>
	I0315 23:10:53.148266   92071 main.go:141] libmachine: (ha-285481-m02)     
	I0315 23:10:53.148280   92071 main.go:141] libmachine: (ha-285481-m02)     
	I0315 23:10:53.148293   92071 main.go:141] libmachine: (ha-285481-m02)   </devices>
	I0315 23:10:53.148304   92071 main.go:141] libmachine: (ha-285481-m02) </domain>
	I0315 23:10:53.148317   92071 main.go:141] libmachine: (ha-285481-m02) 
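
The domain XML dumped above is what the kvm2 driver hands to libvirt for the new node. As a rough illustration only (not the driver's actual code), an equivalent minimal definition can be rendered with text/template; the struct fields and paths below are assumptions chosen to match the logged values:

package main

import (
	"os"
	"text/template"
)

// domainConfig holds the handful of values that vary per machine in the
// XML above; the field names are illustrative, not the kvm2 driver's.
type domainConfig struct {
	Name    string
	Memory  int // MiB
	VCPU    int
	ISO     string
	Disk    string
	Network string
}

const domainXML = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.Memory}}</memory>
  <vcpu>{{.VCPU}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.Disk}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

func main() {
	cfg := domainConfig{
		Name:    "ha-285481-m02",
		Memory:  2200,
		VCPU:    2,
		ISO:     "/path/to/boot2docker.iso",   // placeholder path
		Disk:    "/path/to/machine.rawdisk",   // placeholder path
		Network: "mk-ha-285481",
	}
	tmpl := template.Must(template.New("domain").Parse(domainXML))
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
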
	I0315 23:10:53.156035   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3b:93:b0 in network default
	I0315 23:10:53.156657   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:53.156676   92071 main.go:141] libmachine: (ha-285481-m02) Ensuring networks are active...
	I0315 23:10:53.157473   92071 main.go:141] libmachine: (ha-285481-m02) Ensuring network default is active
	I0315 23:10:53.157847   92071 main.go:141] libmachine: (ha-285481-m02) Ensuring network mk-ha-285481 is active
	I0315 23:10:53.158188   92071 main.go:141] libmachine: (ha-285481-m02) Getting domain xml...
	I0315 23:10:53.158864   92071 main.go:141] libmachine: (ha-285481-m02) Creating domain...
	I0315 23:10:54.395963   92071 main.go:141] libmachine: (ha-285481-m02) Waiting to get IP...
	I0315 23:10:54.396746   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:54.397079   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:54.397110   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:54.397050   92430 retry.go:31] will retry after 252.967197ms: waiting for machine to come up
	I0315 23:10:54.651653   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:54.652024   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:54.652088   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:54.652003   92430 retry.go:31] will retry after 344.44741ms: waiting for machine to come up
	I0315 23:10:54.998750   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:54.999219   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:54.999253   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:54.999165   92430 retry.go:31] will retry after 389.245503ms: waiting for machine to come up
	I0315 23:10:55.389615   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:55.390116   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:55.390154   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:55.390063   92430 retry.go:31] will retry after 474.725516ms: waiting for machine to come up
	I0315 23:10:55.866614   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:55.867053   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:55.867089   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:55.867015   92430 retry.go:31] will retry after 576.819343ms: waiting for machine to come up
	I0315 23:10:56.445568   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:56.445991   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:56.446020   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:56.445928   92430 retry.go:31] will retry after 718.21589ms: waiting for machine to come up
	I0315 23:10:57.165796   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:57.166182   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:57.166212   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:57.166131   92430 retry.go:31] will retry after 1.005197331s: waiting for machine to come up
	I0315 23:10:58.173365   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:58.173972   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:58.174003   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:58.173918   92430 retry.go:31] will retry after 1.327098151s: waiting for machine to come up
	I0315 23:10:59.503386   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:10:59.503852   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:10:59.503876   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:10:59.503797   92430 retry.go:31] will retry after 1.270117038s: waiting for machine to come up
	I0315 23:11:00.776260   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:00.776734   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:00.776763   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:00.776676   92430 retry.go:31] will retry after 2.054242619s: waiting for machine to come up
	I0315 23:11:02.832772   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:02.833308   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:02.833337   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:02.833260   92430 retry.go:31] will retry after 2.37826086s: waiting for machine to come up
	I0315 23:11:05.214828   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:05.215339   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:05.215376   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:05.215266   92430 retry.go:31] will retry after 3.507325443s: waiting for machine to come up
	I0315 23:11:08.723867   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:08.724264   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:08.724292   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:08.724207   92430 retry.go:31] will retry after 2.857890161s: waiting for machine to come up
	I0315 23:11:11.585086   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:11.585402   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find current IP address of domain ha-285481-m02 in network mk-ha-285481
	I0315 23:11:11.585433   92071 main.go:141] libmachine: (ha-285481-m02) DBG | I0315 23:11:11.585372   92430 retry.go:31] will retry after 4.808833362s: waiting for machine to come up
	I0315 23:11:16.398917   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.399364   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has current primary IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.399391   92071 main.go:141] libmachine: (ha-285481-m02) Found IP for machine: 192.168.39.201
	I0315 23:11:16.399401   92071 main.go:141] libmachine: (ha-285481-m02) Reserving static IP address...
	I0315 23:11:16.399851   92071 main.go:141] libmachine: (ha-285481-m02) DBG | unable to find host DHCP lease matching {name: "ha-285481-m02", mac: "52:54:00:3a:fc:bf", ip: "192.168.39.201"} in network mk-ha-285481
	I0315 23:11:16.473894   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Getting to WaitForSSH function...
	I0315 23:11:16.473922   92071 main.go:141] libmachine: (ha-285481-m02) Reserved static IP address: 192.168.39.201
	I0315 23:11:16.473935   92071 main.go:141] libmachine: (ha-285481-m02) Waiting for SSH to be available...
	I0315 23:11:16.476347   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.476799   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:minikube Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.476823   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.476951   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Using SSH client type: external
	I0315 23:11:16.476980   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa (-rw-------)
	I0315 23:11:16.477097   92071 main.go:141] libmachine: (ha-285481-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.201 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 23:11:16.477125   92071 main.go:141] libmachine: (ha-285481-m02) DBG | About to run SSH command:
	I0315 23:11:16.477145   92071 main.go:141] libmachine: (ha-285481-m02) DBG | exit 0
	I0315 23:11:16.603541   92071 main.go:141] libmachine: (ha-285481-m02) DBG | SSH cmd err, output: <nil>: 
	I0315 23:11:16.603856   92071 main.go:141] libmachine: (ha-285481-m02) KVM machine creation complete!
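
The "will retry after ..." lines earlier in this block come from a backoff loop that polls the DHCP leases until the freshly defined domain gets an address. A generic sketch of that pattern, with illustrative names rather than minikube's retry package, looks like this:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or maxAttempts is
// reached, sleeping a randomized, growing delay between attempts, similar
// to the "will retry after ..." lines in the log above.
func retryWithBackoff(maxAttempts int, base time.Duration, fn func() error) error {
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the delay roughly 1.5x per attempt
	}
	return errors.New("gave up waiting")
}

func main() {
	start := time.Now()
	err := retryWithBackoff(10, 250*time.Millisecond, func() error {
		// Stand-in for "look up the domain's IP in the DHCP leases".
		if time.Since(start) < 2*time.Second {
			return errors.New("no IP yet")
		}
		return nil
	})
	fmt.Println("done:", err)
}
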
	I0315 23:11:16.604122   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetConfigRaw
	I0315 23:11:16.604730   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:16.604917   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:16.605106   92071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 23:11:16.605123   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:11:16.606380   92071 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 23:11:16.606395   92071 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 23:11:16.606403   92071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 23:11:16.606411   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.608618   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.608975   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.609014   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.609134   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.609319   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.609481   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.609667   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.609835   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.610134   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.610154   92071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 23:11:16.714937   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:11:16.714971   92071 main.go:141] libmachine: Detecting the provisioner...
	I0315 23:11:16.714981   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.717751   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.718134   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.718154   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.718369   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.718590   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.718800   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.718941   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.719155   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.719422   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.719441   92071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 23:11:16.828443   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 23:11:16.828552   92071 main.go:141] libmachine: found compatible host: buildroot
	I0315 23:11:16.828569   92071 main.go:141] libmachine: Provisioning with buildroot...
	I0315 23:11:16.828581   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:11:16.828879   92071 buildroot.go:166] provisioning hostname "ha-285481-m02"
	I0315 23:11:16.828913   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:11:16.829091   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.832030   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.832496   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.832530   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.832666   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.832881   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.833079   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.833302   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.833478   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.833689   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.833707   92071 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481-m02 && echo "ha-285481-m02" | sudo tee /etc/hostname
	I0315 23:11:16.955677   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481-m02
	
	I0315 23:11:16.955702   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:16.958465   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.958831   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:16.958860   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:16.958998   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:16.959187   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.959308   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:16.959444   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:16.959565   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:16.959779   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:16.959801   92071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:11:17.080778   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
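
The SSH command just above rewrites the 127.0.1.1 entry in the guest's /etc/hosts only when it does not already point at the new hostname. A hedged sketch that assembles the same idempotent shell snippet for an arbitrary hostname (the helper name is made up for illustration):

package main

import "fmt"

// hostsUpdateCmd returns a shell snippet that maps 127.0.1.1 to the given
// hostname in /etc/hosts, replacing an existing 127.0.1.1 entry if present
// and appending one otherwise -- the same logic as the command in the log.
func hostsUpdateCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(hostsUpdateCmd("ha-285481-m02"))
}
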
	I0315 23:11:17.080820   92071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:11:17.080843   92071 buildroot.go:174] setting up certificates
	I0315 23:11:17.080855   92071 provision.go:84] configureAuth start
	I0315 23:11:17.080864   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetMachineName
	I0315 23:11:17.081196   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:17.083944   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.084264   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.084291   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.084433   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.086582   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.086977   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.087000   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.087145   92071 provision.go:143] copyHostCerts
	I0315 23:11:17.087175   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:11:17.087222   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:11:17.087232   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:11:17.087298   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:11:17.087394   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:11:17.087415   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:11:17.087420   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:11:17.087451   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:11:17.087503   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:11:17.087522   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:11:17.087525   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:11:17.087544   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:11:17.087593   92071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481-m02 san=[127.0.0.1 192.168.39.201 ha-285481-m02 localhost minikube]
	I0315 23:11:17.280830   92071 provision.go:177] copyRemoteCerts
	I0315 23:11:17.280889   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:11:17.280913   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.283506   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.283820   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.283841   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.284079   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.284304   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.284457   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.284593   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:17.370618   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:11:17.370737   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:11:17.395790   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:11:17.395871   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 23:11:17.421313   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:11:17.421397   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:11:17.446164   92071 provision.go:87] duration metric: took 365.293267ms to configureAuth
	I0315 23:11:17.446197   92071 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:11:17.446430   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:11:17.446532   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.449285   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.449590   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.449615   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.449830   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.450008   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.450220   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.450390   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.450557   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:17.450785   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:17.450806   92071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:11:17.749612   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:11:17.749640   92071 main.go:141] libmachine: Checking connection to Docker...
	I0315 23:11:17.749648   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetURL
	I0315 23:11:17.751024   92071 main.go:141] libmachine: (ha-285481-m02) DBG | Using libvirt version 6000000
	I0315 23:11:17.753064   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.753432   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.753459   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.753625   92071 main.go:141] libmachine: Docker is up and running!
	I0315 23:11:17.753635   92071 main.go:141] libmachine: Reticulating splines...
	I0315 23:11:17.753643   92071 client.go:171] duration metric: took 25.050302241s to LocalClient.Create
	I0315 23:11:17.753673   92071 start.go:167] duration metric: took 25.050395782s to libmachine.API.Create "ha-285481"
	I0315 23:11:17.753684   92071 start.go:293] postStartSetup for "ha-285481-m02" (driver="kvm2")
	I0315 23:11:17.753695   92071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:11:17.753712   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:17.753944   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:11:17.753972   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.756226   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.756613   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.756642   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.756786   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.756981   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.757162   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.757304   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:17.842063   92071 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:11:17.846629   92071 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:11:17.846661   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:11:17.846728   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:11:17.846829   92071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:11:17.846845   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:11:17.846956   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:11:17.856680   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:11:17.881785   92071 start.go:296] duration metric: took 128.084575ms for postStartSetup
	I0315 23:11:17.881854   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetConfigRaw
	I0315 23:11:17.882547   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:17.885243   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.885665   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.885692   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.885952   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:11:17.886166   92071 start.go:128] duration metric: took 25.201095975s to createHost
	I0315 23:11:17.886194   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:17.888268   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.888556   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.888602   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.888677   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:17.888866   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.889031   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:17.889154   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:17.889324   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:11:17.889533   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.201 22 <nil> <nil>}
	I0315 23:11:17.889547   92071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:11:17.996267   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544277.968770109
	
	I0315 23:11:17.996296   92071 fix.go:216] guest clock: 1710544277.968770109
	I0315 23:11:17.996306   92071 fix.go:229] Guest: 2024-03-15 23:11:17.968770109 +0000 UTC Remote: 2024-03-15 23:11:17.886181477 +0000 UTC m=+82.104509591 (delta=82.588632ms)
	I0315 23:11:17.996327   92071 fix.go:200] guest clock delta is within tolerance: 82.588632ms
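
The two fix.go lines above compare the guest's clock against the host's and accept the drift because it stays under the allowed tolerance. The check reduces to an absolute-difference comparison; a trivial sketch, where the 2s tolerance is an assumption rather than minikube's configured value:

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports whether the guest clock is within tolerance of the
// host clock, as in the "guest clock delta is within tolerance" log line.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(82 * time.Millisecond) // roughly the delta seen above
	delta, ok := clockDeltaOK(guest, host, 2*time.Second)
	fmt.Printf("delta=%s within tolerance: %v\n", delta, ok)
}
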
	I0315 23:11:17.996333   92071 start.go:83] releasing machines lock for "ha-285481-m02", held for 25.311355257s
	I0315 23:11:17.996358   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:17.996698   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:17.999358   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:17.999729   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:17.999766   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.002384   92071 out.go:177] * Found network options:
	I0315 23:11:18.004023   92071 out.go:177]   - NO_PROXY=192.168.39.23
	W0315 23:11:18.005372   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:11:18.005420   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:18.005999   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:18.006203   92071 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:11:18.006325   92071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:11:18.006365   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	W0315 23:11:18.006366   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:11:18.006435   92071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:11:18.006457   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:11:18.009221   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009430   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009607   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:18.009634   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009779   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:18.009800   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:18.009805   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:18.009980   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:11:18.009980   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:18.010182   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:11:18.010198   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:18.010343   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:11:18.010377   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:18.010497   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:11:18.257359   92071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:11:18.264374   92071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:11:18.264477   92071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:11:18.281573   92071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
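
The find/mv step above neutralizes any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI configuration minikube manages is picked up by the runtime. A rough local sketch of the same rename-to-disable idea (illustrative only; the real step runs over SSH with sudo):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNIConfigs renames matching CNI config files in dir by
    // appending ".mk_disabled", mirroring the `find ... -exec mv {} {}.mk_disabled`
    // step shown in the log above.
    func disableBridgeCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("disabled:", disabled)
    }
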
	I0315 23:11:18.281608   92071 start.go:494] detecting cgroup driver to use...
	I0315 23:11:18.281676   92071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:11:18.303233   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:11:18.319295   92071 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:11:18.319372   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:11:18.335486   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:11:18.351237   92071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:11:18.467012   92071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:11:18.637365   92071 docker.go:233] disabling docker service ...
	I0315 23:11:18.637443   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:11:18.653001   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:11:18.667273   92071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:11:18.792614   92071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:11:18.913797   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:11:18.928846   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:11:18.948395   92071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:11:18.948474   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.960287   92071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:11:18.960376   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.972092   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.983557   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:11:18.995153   92071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:11:19.008066   92071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:11:19.018665   92071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 23:11:19.018736   92071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 23:11:19.033254   92071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
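
Above, the sysctl probe for net.bridge.bridge-nf-call-iptables fails because the br_netfilter module is not loaded yet, so the flow falls back to modprobe and then enables IPv4 forwarding. A condensed sketch of that probe-then-fallback logic using os/exec (a sketch of the idea, not the ssh_runner implementation):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the log's flow: check the sysctl, load the
    // module only if the sysctl is missing, then turn on IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// The sysctl is absent until br_netfilter is loaded; this is expected.
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("loading br_netfilter: %w", err)
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println("error:", err)
    	}
    }
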
	I0315 23:11:19.044086   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:11:19.166681   92071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:11:19.329472   92071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:11:19.329539   92071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:11:19.334805   92071 start.go:562] Will wait 60s for crictl version
	I0315 23:11:19.334851   92071 ssh_runner.go:195] Run: which crictl
	I0315 23:11:19.338846   92071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:11:19.380782   92071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:11:19.380874   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:11:19.409303   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:11:19.439910   92071 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:11:19.441369   92071 out.go:177]   - env NO_PROXY=192.168.39.23
	I0315 23:11:19.442697   92071 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:11:19.445455   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:19.445796   92071 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:11:07 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:11:19.445823   92071 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:11:19.446082   92071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:11:19.450598   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:11:19.463286   92071 mustload.go:65] Loading cluster: ha-285481
	I0315 23:11:19.463491   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:11:19.463787   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:11:19.463836   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:11:19.478653   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38171
	I0315 23:11:19.479071   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:11:19.479544   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:11:19.479564   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:11:19.479842   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:11:19.480019   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:11:19.481450   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:11:19.481740   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:11:19.481781   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:11:19.495824   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44729
	I0315 23:11:19.496342   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:11:19.496842   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:11:19.496872   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:11:19.497152   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:11:19.497332   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:11:19.497475   92071 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.201
	I0315 23:11:19.497485   92071 certs.go:194] generating shared ca certs ...
	I0315 23:11:19.497499   92071 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:11:19.497633   92071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:11:19.497677   92071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:11:19.497687   92071 certs.go:256] generating profile certs ...
	I0315 23:11:19.497794   92071 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:11:19.497820   92071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027
	I0315 23:11:19.497836   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.201 192.168.39.254]
	I0315 23:11:19.620686   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027 ...
	I0315 23:11:19.620718   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027: {Name:mk85afc9afc0cec0ea2b0d31c760805aa2a86c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:11:19.620908   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027 ...
	I0315 23:11:19.620926   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027: {Name:mke92e5c9595faada63f5a098b96c1719f9a5cac Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:11:19.621026   92071 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.c32bf027 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:11:19.621166   92071 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.c32bf027 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
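
The apiserver certificate is regenerated here because its SAN list must now cover the new control-plane IP (192.168.39.201) alongside the first node (192.168.39.23), the kube-vip VIP (192.168.39.254), localhost and the service IPs. A self-contained crypto/x509 sketch of issuing a server certificate with that SAN list from a throwaway CA (the CA below merely stands in for minikubeCA; names and lifetimes are illustrative):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	// Throwaway CA standing in for the reused minikubeCA key pair.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().AddDate(10, 0, 0),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	caCert, _ := x509.ParseCertificate(caDER)

    	// SANs mirror the IP list in the log: service IPs, localhost,
    	// both control-plane node IPs and the kube-vip VIP.
    	sans := []net.IP{
    		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
    		net.ParseIP("192.168.39.23"), net.ParseIP("192.168.39.201"), net.ParseIP("192.168.39.254"),
    	}
    	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().AddDate(3, 0, 0),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  sans,
    	}
    	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

    	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: srvDER})))
    }
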
	I0315 23:11:19.621294   92071 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:11:19.621311   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:11:19.621324   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:11:19.621337   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:11:19.621348   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:11:19.621358   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:11:19.621368   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:11:19.621378   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:11:19.621388   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:11:19.621434   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:11:19.621462   92071 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:11:19.621472   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:11:19.621492   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:11:19.621512   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:11:19.621534   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:11:19.621572   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:11:19.621596   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:11:19.621610   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:19.621621   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:11:19.621654   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:11:19.624944   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:19.625520   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:11:19.625550   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:19.625780   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:11:19.625982   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:11:19.626150   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:11:19.626305   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:11:19.703773   92071 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0315 23:11:19.708688   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 23:11:19.720860   92071 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0315 23:11:19.724985   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 23:11:19.736231   92071 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 23:11:19.740487   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 23:11:19.751401   92071 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0315 23:11:19.755603   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0315 23:11:19.766660   92071 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0315 23:11:19.770831   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 23:11:19.781384   92071 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0315 23:11:19.785422   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0315 23:11:19.796815   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:11:19.825782   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:11:19.853084   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:11:19.882045   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:11:19.911348   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0315 23:11:19.938702   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:11:19.967987   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:11:19.993997   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:11:20.021482   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:11:20.048102   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:11:20.073693   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:11:20.099403   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 23:11:20.118548   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 23:11:20.136388   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 23:11:20.154057   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0315 23:11:20.171430   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 23:11:20.189279   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0315 23:11:20.207516   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 23:11:20.225059   92071 ssh_runner.go:195] Run: openssl version
	I0315 23:11:20.231114   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:11:20.242419   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:20.247132   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:20.247183   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:11:20.252977   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:11:20.263906   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:11:20.275213   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:11:20.280147   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:11:20.280236   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:11:20.286093   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:11:20.297326   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:11:20.308373   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:11:20.313081   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:11:20.313161   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:11:20.319170   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
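
Each CA above is installed under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted roots. A small sketch of the hash-and-link step, assuming it runs as root on the guest (a sketch of the mechanism, not minikube's certs.go):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // linkCACert computes the OpenSSL subject hash of certPath and symlinks it
    // into /etc/ssl/certs/<hash>.0, mirroring the `openssl x509 -hash -noout`
    // plus `ln -fs` commands in the log above.
    func linkCACert(certPath string) (string, error) {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return "", fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // replace any stale link, like `ln -fs`
    	return link, os.Symlink(certPath, link)
    }

    func main() {
    	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("created", link)
    }
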
	I0315 23:11:20.330637   92071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:11:20.335223   92071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 23:11:20.335273   92071 kubeadm.go:928] updating node {m02 192.168.39.201 8443 v1.28.4 crio true true} ...
	I0315 23:11:20.335396   92071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.201
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:11:20.335430   92071 kube-vip.go:111] generating kube-vip config ...
	I0315 23:11:20.335468   92071 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:11:20.353067   92071 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:11:20.353156   92071 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
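
The static pod manifest above carries the HA settings as container environment variables: the VIP address 192.168.39.254, the API port 8443, leader election via the plndr-cp-lock lease, and lb_enable for control-plane load balancing. A toy text/template sketch of injecting just the address and port (only a fragment of the manifest, not the template minikube actually uses):

    package main

    import (
    	"os"
    	"text/template"
    )

    // Only a fragment of the manifest is templated here; the full file in the
    // log also carries the leader-election and load-balancing settings.
    const vipEnvTmpl = `    env:
        - name: address
          value: {{ .VIP }}
        - name: port
          value: "{{ .Port }}"
        - name: lb_enable
          value: "true"
    `

    func main() {
    	t := template.Must(template.New("kube-vip-env").Parse(vipEnvTmpl))
    	// Values taken from the log: the HA VIP and the API server port.
    	_ = t.Execute(os.Stdout, struct {
    		VIP  string
    		Port int
    	}{VIP: "192.168.39.254", Port: 8443})
    }
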
	I0315 23:11:20.353217   92071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:11:20.364295   92071 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 23:11:20.364366   92071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 23:11:20.375447   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0315 23:11:20.375458   92071 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0315 23:11:20.375479   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:11:20.375491   92071 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0315 23:11:20.375554   92071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:11:20.380055   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 23:11:20.380092   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 23:11:21.023047   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:11:21.023129   92071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:11:21.030025   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 23:11:21.030055   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 23:11:21.506672   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:11:21.521518   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:11:21.521633   92071 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:11:21.526227   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 23:11:21.526271   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
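
For kubectl, kubeadm and kubelet the pattern above is the same: stat the target under /var/lib/minikube/binaries/v1.28.4 and copy the binary from the local download cache only when the stat fails. A local sketch of that check-then-copy pattern (paths are illustrative; the real transfer goes over SSH via scp):

    package main

    import (
    	"fmt"
    	"io"
    	"os"
    	"path/filepath"
    )

    // ensureBinary copies src to dst only when dst does not already exist,
    // mirroring the stat-then-scp flow for kubectl/kubeadm/kubelet in the log.
    func ensureBinary(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		return nil // already present, nothing to transfer
    	}
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o755)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	err := ensureBinary(
    		os.ExpandEnv("$HOME/.minikube/cache/linux/amd64/v1.28.4/kubelet"),
    		"/var/lib/minikube/binaries/v1.28.4/kubelet",
    	)
    	fmt.Println("err:", err)
    }
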
	I0315 23:11:22.004288   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 23:11:22.014822   92071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0315 23:11:22.032874   92071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:11:22.050860   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:11:22.068725   92071 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:11:22.073004   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
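
The /etc/hosts update above is idempotent: any existing control-plane.minikube.internal line is stripped before the current VIP mapping is appended, so repeated provisioning never accumulates duplicate entries. A stdlib sketch of the same rewrite-via-temp-file approach (a sketch assuming local file access, not the ssh_runner command itself):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // setHostsEntry rewrites hostsPath so that exactly one line maps host to ip,
    // dropping any previous line for the same host first.
    func setHostsEntry(hostsPath, ip, host string) error {
    	data, err := os.ReadFile(hostsPath)
    	if err != nil {
    		return err
    	}
    	var keep []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if strings.HasSuffix(line, "\t"+host) {
    			continue // drop the stale mapping, like `grep -v`
    		}
    		if line != "" {
    			keep = append(keep, line)
    		}
    	}
    	keep = append(keep, ip+"\t"+host)
    	tmp := hostsPath + ".tmp"
    	if err := os.WriteFile(tmp, []byte(strings.Join(keep, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, hostsPath)
    }

    func main() {
    	fmt.Println(setHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"))
    }
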
	I0315 23:11:22.087536   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:11:22.215082   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:11:22.233280   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:11:22.233700   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:11:22.233747   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:11:22.248515   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39507
	I0315 23:11:22.249034   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:11:22.249547   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:11:22.249569   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:11:22.249914   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:11:22.250127   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:11:22.250277   92071 start.go:316] joinCluster: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:11:22.250391   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 23:11:22.250417   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:11:22.253285   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:22.253726   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:11:22.253753   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:11:22.253883   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:11:22.254070   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:11:22.254241   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:11:22.254385   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:11:22.436047   92071 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:11:22.436101   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dp4yf.bvb9yd6ppxvzjirg --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m02 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443"
	I0315 23:12:03.809527   92071 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 2dp4yf.bvb9yd6ppxvzjirg --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m02 --control-plane --apiserver-advertise-address=192.168.39.201 --apiserver-bind-port=8443": (41.37339146s)
	I0315 23:12:03.809567   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 23:12:04.174017   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-285481-m02 minikube.k8s.io/updated_at=2024_03_15T23_12_04_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=ha-285481 minikube.k8s.io/primary=false
	I0315 23:12:04.310914   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-285481-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 23:12:04.449321   92071 start.go:318] duration metric: took 42.19903763s to joinCluster
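
Joining m02 as a second control plane is driven from the primary node: a join command is minted with `kubeadm token create --print-join-command`, executed on m02 with --control-plane and the node's advertise address (taking about 41s above), and afterwards the node is labelled and its control-plane NoSchedule taint is removed so it can also run workloads. A trimmed sketch of issuing the join itself; the token and CA hash below are placeholders, not the values from this run:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Placeholder token/hash; in the log these come from
    	// `kubeadm token create --print-join-command` run on the primary node.
    	token := "abcdef.0123456789abcdef"
    	caHash := "sha256:<discovery-token-ca-cert-hash>"

    	join := exec.Command("sudo", "kubeadm", "join", "control-plane.minikube.internal:8443",
    		"--token", token,
    		"--discovery-token-ca-cert-hash", caHash,
    		"--control-plane",
    		"--apiserver-advertise-address", "192.168.39.201",
    		"--apiserver-bind-port", "8443",
    		"--cri-socket", "unix:///var/run/crio/crio.sock",
    		"--node-name", "ha-285481-m02",
    	)
    	out, err := join.CombinedOutput()
    	fmt.Printf("err=%v\n%s", err, out)
    }
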
	I0315 23:12:04.449408   92071 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:12:04.450702   92071 out.go:177] * Verifying Kubernetes components...
	I0315 23:12:04.449668   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:04.451848   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:12:04.649867   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:12:04.665221   92071 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:12:04.665503   92071 kapi.go:59] client config for ha-285481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt", KeyFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key", CAFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 23:12:04.665568   92071 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.23:8443
	I0315 23:12:04.665827   92071 node_ready.go:35] waiting up to 6m0s for node "ha-285481-m02" to be "Ready" ...
	I0315 23:12:04.665942   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:04.665953   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:04.665964   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:04.665970   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:04.675576   92071 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0315 23:12:05.166643   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:05.166666   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:05.166674   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:05.166678   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:05.169879   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:05.666721   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:05.666747   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:05.666759   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:05.666764   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:05.670743   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:06.166562   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:06.166596   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:06.166607   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:06.166612   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:06.170108   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:06.666130   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:06.666157   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:06.666170   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:06.666175   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:06.675669   92071 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0315 23:12:06.676389   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:07.166719   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:07.166739   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:07.166747   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:07.166751   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:07.170970   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:07.666691   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:07.666715   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:07.666722   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:07.666727   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:07.671505   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:08.166907   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:08.166935   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:08.166947   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:08.166952   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:08.171388   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:08.666107   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:08.666131   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:08.666141   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:08.666144   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:08.670576   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:09.166735   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:09.166762   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:09.166770   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:09.166775   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:09.170847   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:09.171386   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:09.666850   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:09.666874   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:09.666882   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:09.666886   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:09.670926   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:10.166964   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:10.166986   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:10.166994   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:10.166998   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:10.170606   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:10.666425   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:10.666449   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:10.666457   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:10.666460   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:10.671198   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:11.167034   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:11.167058   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:11.167065   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:11.167069   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:11.170974   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:11.171640   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:11.666253   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:11.666281   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:11.666293   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:11.666302   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:11.670535   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:12.166855   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:12.166883   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:12.166895   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:12.166900   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:12.170570   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:12.666128   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:12.666150   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:12.666158   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:12.666165   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:12.670101   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:13.166829   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:13.166854   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:13.166868   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:13.166873   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:13.172325   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:12:13.173011   92071 node_ready.go:53] node "ha-285481-m02" has status "Ready":"False"
	I0315 23:12:13.666393   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:13.666426   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:13.666436   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:13.666443   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:13.670340   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.166651   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.166695   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.166706   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.166710   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.172556   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:12:14.174054   92071 node_ready.go:49] node "ha-285481-m02" has status "Ready":"True"
	I0315 23:12:14.174073   92071 node_ready.go:38] duration metric: took 9.508228506s for node "ha-285481-m02" to be "Ready" ...
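
The GET loop above polls /api/v1/nodes/ha-285481-m02 roughly every 500ms until the node's Ready condition flips to True, which took about 9.5s here. An equivalent sketch using client-go, assuming a kubeconfig in the default location (this mirrors the behaviour, not minikube's node_ready.go itself):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // waitNodeReady polls the node until its Ready condition is True or the
    // timeout expires, mirroring the 500ms GET loop in the log above.
    func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					return nil
    				}
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("node %s not Ready after %v", name, timeout)
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(waitNodeReady(cs, "ha-285481-m02", 6*time.Minute))
    }
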
	I0315 23:12:14.174083   92071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 23:12:14.174169   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:14.174178   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.174185   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.174189   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.180352   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:12:14.186761   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.186876   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9c44k
	I0315 23:12:14.186887   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.186894   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.186900   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.190517   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.191402   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.191417   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.191425   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.191430   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.194391   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.195024   92071 pod_ready.go:92] pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.195047   92071 pod_ready.go:81] duration metric: took 8.253041ms for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.195059   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.195130   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qxtp4
	I0315 23:12:14.195139   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.195145   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.195149   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.197852   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.198531   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.198546   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.198557   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.198561   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.201010   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.201670   92071 pod_ready.go:92] pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.201689   92071 pod_ready.go:81] duration metric: took 6.618034ms for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.201697   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.201747   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481
	I0315 23:12:14.201754   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.201761   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.201769   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.204434   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:12:14.205136   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.205153   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.205161   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.205166   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.211147   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:12:14.211715   92071 pod_ready.go:92] pod "etcd-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.211740   92071 pod_ready.go:81] duration metric: took 10.032825ms for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.211753   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.211821   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m02
	I0315 23:12:14.211832   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.211841   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.211846   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.215218   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.215824   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.215842   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.215854   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.215863   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.219968   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:14.220791   92071 pod_ready.go:92] pod "etcd-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.220808   92071 pod_ready.go:81] duration metric: took 9.041234ms for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.220822   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.367255   92071 request.go:629] Waited for 146.342872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481
	I0315 23:12:14.367373   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481
	I0315 23:12:14.367384   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.367393   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.367400   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.371391   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.566974   92071 request.go:629] Waited for 194.834112ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.567030   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:14.567035   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.567043   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.567048   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.570570   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.571196   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.571218   92071 pod_ready.go:81] duration metric: took 350.387909ms for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.571230   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.767188   92071 request.go:629] Waited for 195.878941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m02
	I0315 23:12:14.767287   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m02
	I0315 23:12:14.767297   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.767307   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.767338   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.771442   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:14.966769   92071 request.go:629] Waited for 194.281826ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.966840   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:14.966848   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:14.966859   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:14.966870   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:14.970756   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:14.971339   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:14.971363   92071 pod_ready.go:81] duration metric: took 400.122734ms for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:14.971390   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.167483   92071 request.go:629] Waited for 196.015214ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481
	I0315 23:12:15.167544   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481
	I0315 23:12:15.167549   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.167570   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.167574   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.171380   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:15.367395   92071 request.go:629] Waited for 195.406578ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:15.367484   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:15.367497   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.367509   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.367515   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.371862   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:15.372529   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:15.372554   92071 pod_ready.go:81] duration metric: took 401.156298ms for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.372568   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.567597   92071 request.go:629] Waited for 194.940851ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:12:15.567675   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:12:15.567683   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.567691   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.567698   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.571236   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:15.767288   92071 request.go:629] Waited for 195.385555ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:15.767367   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:15.767376   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.767385   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.767391   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.771308   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:15.771765   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:15.771788   92071 pod_ready.go:81] duration metric: took 399.209045ms for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.771798   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:15.966700   92071 request.go:629] Waited for 194.821913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:12:15.966778   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:12:15.966787   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:15.966798   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:15.966806   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:15.971003   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.167303   92071 request.go:629] Waited for 195.440131ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:16.167398   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:16.167406   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.167414   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.167421   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.170798   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:16.171380   92071 pod_ready.go:92] pod "kube-proxy-2hcgt" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:16.171399   92071 pod_ready.go:81] duration metric: took 399.595276ms for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.171409   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.367474   92071 request.go:629] Waited for 195.988442ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:12:16.367558   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:12:16.367564   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.367572   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.367578   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.372027   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.567019   92071 request.go:629] Waited for 194.38252ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.567078   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.567083   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.567091   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.567094   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.571363   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.572462   92071 pod_ready.go:92] pod "kube-proxy-cml9m" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:16.572486   92071 pod_ready.go:81] duration metric: took 401.069342ms for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.572498   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.767682   92071 request.go:629] Waited for 195.091788ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:12:16.767759   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:12:16.767773   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.767785   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.767793   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.772564   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.966997   92071 request.go:629] Waited for 193.388496ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.967099   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:12:16.967131   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:16.967142   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:16.967147   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:16.971452   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:16.972250   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:16.972271   92071 pod_ready.go:81] duration metric: took 399.764452ms for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:16.972293   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:17.167453   92071 request.go:629] Waited for 195.048432ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:12:17.167534   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:12:17.167544   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.167552   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.167558   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.171521   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:17.367634   92071 request.go:629] Waited for 195.462259ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:17.367717   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:12:17.367722   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.367731   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.367735   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.372166   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:12:17.372789   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:12:17.372809   92071 pod_ready.go:81] duration metric: took 400.508205ms for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:12:17.372819   92071 pod_ready.go:38] duration metric: took 3.198702211s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
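	[editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above are produced by client-go's default token-bucket rate limiter on the client side, not by the apiserver. A minimal sketch, assuming a standard client-go consumer (this is not minikube's code; the kubeconfig path and the chosen QPS/Burst values are illustrative), of how those limits can be raised to shorten such waits:

	package example

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// newFastClient builds a clientset with a larger client-side rate budget,
	// which shortens the throttling pauses of the kind logged above.
	func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // client-go default is 5 requests/s
		cfg.Burst = 100 // client-go default is 10
		return kubernetes.NewForConfig(cfg)
	}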
	I0315 23:12:17.372838   92071 api_server.go:52] waiting for apiserver process to appear ...
	I0315 23:12:17.372911   92071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:12:17.389626   92071 api_server.go:72] duration metric: took 12.940172719s to wait for apiserver process to appear ...
	I0315 23:12:17.389657   92071 api_server.go:88] waiting for apiserver healthz status ...
	I0315 23:12:17.389693   92071 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I0315 23:12:17.395595   92071 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I0315 23:12:17.395688   92071 round_trippers.go:463] GET https://192.168.39.23:8443/version
	I0315 23:12:17.395700   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.395711   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.395720   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.396958   92071 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0315 23:12:17.397057   92071 api_server.go:141] control plane version: v1.28.4
	I0315 23:12:17.397084   92071 api_server.go:131] duration metric: took 7.413304ms to wait for apiserver health ...
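	[editor's note] The healthz wait above boils down to issuing GET /healthz against the control-plane endpoint and accepting a 200 "ok" response. A bare-bones sketch of that probe follows; the InsecureSkipVerify shortcut is an assumption for brevity, whereas a real client would trust the cluster CA from the kubeconfig.

	package example

	import (
		"crypto/tls"
		"io"
		"net/http"
		"time"
	)

	// apiserverHealthy performs the same kind of probe as the healthz check above:
	// GET the given URL and treat a 200 response with body "ok" as healthy.
	func apiserverHealthy(url string) (bool, error) {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
		}
		resp, err := client.Get(url) // e.g. https://192.168.39.23:8443/healthz
		if err != nil {
			return false, err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
	}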
	I0315 23:12:17.397098   92071 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 23:12:17.567501   92071 request.go:629] Waited for 170.339953ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.567587   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.567595   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.567608   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.567617   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.573734   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:12:17.578383   92071 system_pods.go:59] 17 kube-system pods found
	I0315 23:12:17.578411   92071 system_pods.go:61] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:12:17.578420   92071 system_pods.go:61] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:12:17.578424   92071 system_pods.go:61] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:12:17.578427   92071 system_pods.go:61] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:12:17.578430   92071 system_pods.go:61] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:12:17.578434   92071 system_pods.go:61] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:12:17.578437   92071 system_pods.go:61] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:12:17.578440   92071 system_pods.go:61] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:12:17.578444   92071 system_pods.go:61] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:12:17.578447   92071 system_pods.go:61] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:12:17.578450   92071 system_pods.go:61] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:12:17.578453   92071 system_pods.go:61] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:12:17.578456   92071 system_pods.go:61] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:12:17.578462   92071 system_pods.go:61] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:12:17.578465   92071 system_pods.go:61] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:12:17.578467   92071 system_pods.go:61] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:12:17.578470   92071 system_pods.go:61] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:12:17.578475   92071 system_pods.go:74] duration metric: took 181.371699ms to wait for pod list to return data ...
	I0315 23:12:17.578482   92071 default_sa.go:34] waiting for default service account to be created ...
	I0315 23:12:17.767033   92071 request.go:629] Waited for 188.478323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:12:17.767114   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:12:17.767120   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.767128   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.767134   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.771003   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:17.771244   92071 default_sa.go:45] found service account: "default"
	I0315 23:12:17.771265   92071 default_sa.go:55] duration metric: took 192.776688ms for default service account to be created ...
	I0315 23:12:17.771274   92071 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 23:12:17.967530   92071 request.go:629] Waited for 196.177413ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.967630   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:12:17.967638   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:17.967649   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:17.967657   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:17.973878   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:12:17.978474   92071 system_pods.go:86] 17 kube-system pods found
	I0315 23:12:17.978503   92071 system_pods.go:89] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:12:17.978508   92071 system_pods.go:89] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:12:17.978512   92071 system_pods.go:89] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:12:17.978517   92071 system_pods.go:89] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:12:17.978520   92071 system_pods.go:89] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:12:17.978525   92071 system_pods.go:89] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:12:17.978528   92071 system_pods.go:89] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:12:17.978532   92071 system_pods.go:89] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:12:17.978536   92071 system_pods.go:89] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:12:17.978540   92071 system_pods.go:89] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:12:17.978543   92071 system_pods.go:89] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:12:17.978547   92071 system_pods.go:89] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:12:17.978550   92071 system_pods.go:89] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:12:17.978554   92071 system_pods.go:89] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:12:17.978557   92071 system_pods.go:89] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:12:17.978562   92071 system_pods.go:89] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:12:17.978572   92071 system_pods.go:89] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:12:17.978581   92071 system_pods.go:126] duration metric: took 207.300967ms to wait for k8s-apps to be running ...
	I0315 23:12:17.978596   92071 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 23:12:17.978668   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:12:17.999786   92071 system_svc.go:56] duration metric: took 21.178532ms WaitForService to wait for kubelet
	I0315 23:12:17.999824   92071 kubeadm.go:576] duration metric: took 13.550375462s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:12:17.999850   92071 node_conditions.go:102] verifying NodePressure condition ...
	I0315 23:12:18.167354   92071 request.go:629] Waited for 167.370598ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes
	I0315 23:12:18.167423   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes
	I0315 23:12:18.167429   92071 round_trippers.go:469] Request Headers:
	I0315 23:12:18.167437   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:12:18.167465   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:12:18.171040   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:12:18.171798   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:12:18.171835   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:12:18.171847   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:12:18.171851   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:12:18.171855   92071 node_conditions.go:105] duration metric: took 172.000134ms to run NodePressure ...
	I0315 23:12:18.171866   92071 start.go:240] waiting for startup goroutines ...
	I0315 23:12:18.171895   92071 start.go:254] writing updated cluster config ...
	I0315 23:12:18.174181   92071 out.go:177] 
	I0315 23:12:18.175893   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:18.175991   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:12:18.177966   92071 out.go:177] * Starting "ha-285481-m03" control-plane node in "ha-285481" cluster
	I0315 23:12:18.179302   92071 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:12:18.179347   92071 cache.go:56] Caching tarball of preloaded images
	I0315 23:12:18.179455   92071 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:12:18.179468   92071 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:12:18.179573   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:12:18.179753   92071 start.go:360] acquireMachinesLock for ha-285481-m03: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:12:18.179794   92071 start.go:364] duration metric: took 22.965µs to acquireMachinesLock for "ha-285481-m03"
	I0315 23:12:18.179809   92071 start.go:93] Provisioning new machine with config: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:12:18.179909   92071 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0315 23:12:18.181483   92071 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:12:18.181569   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:18.181610   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:18.196579   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43035
	I0315 23:12:18.197065   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:18.197487   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:18.197505   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:18.197809   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:18.198018   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:18.198162   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:18.198324   92071 start.go:159] libmachine.API.Create for "ha-285481" (driver="kvm2")
	I0315 23:12:18.198351   92071 client.go:168] LocalClient.Create starting
	I0315 23:12:18.198387   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:12:18.198425   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:12:18.198443   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:12:18.198520   92071 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:12:18.198552   92071 main.go:141] libmachine: Decoding PEM data...
	I0315 23:12:18.198569   92071 main.go:141] libmachine: Parsing certificate...
	I0315 23:12:18.198602   92071 main.go:141] libmachine: Running pre-create checks...
	I0315 23:12:18.198611   92071 main.go:141] libmachine: (ha-285481-m03) Calling .PreCreateCheck
	I0315 23:12:18.198777   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetConfigRaw
	I0315 23:12:18.199124   92071 main.go:141] libmachine: Creating machine...
	I0315 23:12:18.199139   92071 main.go:141] libmachine: (ha-285481-m03) Calling .Create
	I0315 23:12:18.199244   92071 main.go:141] libmachine: (ha-285481-m03) Creating KVM machine...
	I0315 23:12:18.200608   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found existing default KVM network
	I0315 23:12:18.200730   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found existing private KVM network mk-ha-285481
	I0315 23:12:18.200842   92071 main.go:141] libmachine: (ha-285481-m03) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03 ...
	I0315 23:12:18.200863   92071 main.go:141] libmachine: (ha-285481-m03) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:12:18.200933   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.200832   92758 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:12:18.201063   92071 main.go:141] libmachine: (ha-285481-m03) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:12:18.433977   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.433846   92758 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa...
	I0315 23:12:18.667560   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.667404   92758 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/ha-285481-m03.rawdisk...
	I0315 23:12:18.667595   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Writing magic tar header
	I0315 23:12:18.667614   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Writing SSH key tar header
	I0315 23:12:18.667658   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:18.667518   92758 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03 ...
	I0315 23:12:18.667702   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03 (perms=drwx------)
	I0315 23:12:18.667736   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:12:18.667750   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03
	I0315 23:12:18.667771   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:12:18.667786   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:12:18.667802   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:12:18.667815   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:12:18.667830   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:12:18.667846   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:12:18.667861   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:12:18.667877   92071 main.go:141] libmachine: (ha-285481-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:12:18.667888   92071 main.go:141] libmachine: (ha-285481-m03) Creating domain...
	I0315 23:12:18.667898   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:12:18.667913   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Checking permissions on dir: /home
	I0315 23:12:18.667925   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Skipping /home - not owner
	I0315 23:12:18.668824   92071 main.go:141] libmachine: (ha-285481-m03) define libvirt domain using xml: 
	I0315 23:12:18.668838   92071 main.go:141] libmachine: (ha-285481-m03) <domain type='kvm'>
	I0315 23:12:18.668847   92071 main.go:141] libmachine: (ha-285481-m03)   <name>ha-285481-m03</name>
	I0315 23:12:18.668855   92071 main.go:141] libmachine: (ha-285481-m03)   <memory unit='MiB'>2200</memory>
	I0315 23:12:18.668864   92071 main.go:141] libmachine: (ha-285481-m03)   <vcpu>2</vcpu>
	I0315 23:12:18.668876   92071 main.go:141] libmachine: (ha-285481-m03)   <features>
	I0315 23:12:18.668888   92071 main.go:141] libmachine: (ha-285481-m03)     <acpi/>
	I0315 23:12:18.668899   92071 main.go:141] libmachine: (ha-285481-m03)     <apic/>
	I0315 23:12:18.668908   92071 main.go:141] libmachine: (ha-285481-m03)     <pae/>
	I0315 23:12:18.668919   92071 main.go:141] libmachine: (ha-285481-m03)     
	I0315 23:12:18.668932   92071 main.go:141] libmachine: (ha-285481-m03)   </features>
	I0315 23:12:18.668948   92071 main.go:141] libmachine: (ha-285481-m03)   <cpu mode='host-passthrough'>
	I0315 23:12:18.668960   92071 main.go:141] libmachine: (ha-285481-m03)   
	I0315 23:12:18.668976   92071 main.go:141] libmachine: (ha-285481-m03)   </cpu>
	I0315 23:12:18.668987   92071 main.go:141] libmachine: (ha-285481-m03)   <os>
	I0315 23:12:18.668993   92071 main.go:141] libmachine: (ha-285481-m03)     <type>hvm</type>
	I0315 23:12:18.669000   92071 main.go:141] libmachine: (ha-285481-m03)     <boot dev='cdrom'/>
	I0315 23:12:18.669007   92071 main.go:141] libmachine: (ha-285481-m03)     <boot dev='hd'/>
	I0315 23:12:18.669019   92071 main.go:141] libmachine: (ha-285481-m03)     <bootmenu enable='no'/>
	I0315 23:12:18.669050   92071 main.go:141] libmachine: (ha-285481-m03)   </os>
	I0315 23:12:18.669061   92071 main.go:141] libmachine: (ha-285481-m03)   <devices>
	I0315 23:12:18.669077   92071 main.go:141] libmachine: (ha-285481-m03)     <disk type='file' device='cdrom'>
	I0315 23:12:18.669096   92071 main.go:141] libmachine: (ha-285481-m03)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/boot2docker.iso'/>
	I0315 23:12:18.669134   92071 main.go:141] libmachine: (ha-285481-m03)       <target dev='hdc' bus='scsi'/>
	I0315 23:12:18.669158   92071 main.go:141] libmachine: (ha-285481-m03)       <readonly/>
	I0315 23:12:18.669172   92071 main.go:141] libmachine: (ha-285481-m03)     </disk>
	I0315 23:12:18.669179   92071 main.go:141] libmachine: (ha-285481-m03)     <disk type='file' device='disk'>
	I0315 23:12:18.669194   92071 main.go:141] libmachine: (ha-285481-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:12:18.669211   92071 main.go:141] libmachine: (ha-285481-m03)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/ha-285481-m03.rawdisk'/>
	I0315 23:12:18.669227   92071 main.go:141] libmachine: (ha-285481-m03)       <target dev='hda' bus='virtio'/>
	I0315 23:12:18.669237   92071 main.go:141] libmachine: (ha-285481-m03)     </disk>
	I0315 23:12:18.669247   92071 main.go:141] libmachine: (ha-285481-m03)     <interface type='network'>
	I0315 23:12:18.669254   92071 main.go:141] libmachine: (ha-285481-m03)       <source network='mk-ha-285481'/>
	I0315 23:12:18.669263   92071 main.go:141] libmachine: (ha-285481-m03)       <model type='virtio'/>
	I0315 23:12:18.669268   92071 main.go:141] libmachine: (ha-285481-m03)     </interface>
	I0315 23:12:18.669276   92071 main.go:141] libmachine: (ha-285481-m03)     <interface type='network'>
	I0315 23:12:18.669287   92071 main.go:141] libmachine: (ha-285481-m03)       <source network='default'/>
	I0315 23:12:18.669295   92071 main.go:141] libmachine: (ha-285481-m03)       <model type='virtio'/>
	I0315 23:12:18.669300   92071 main.go:141] libmachine: (ha-285481-m03)     </interface>
	I0315 23:12:18.669305   92071 main.go:141] libmachine: (ha-285481-m03)     <serial type='pty'>
	I0315 23:12:18.669315   92071 main.go:141] libmachine: (ha-285481-m03)       <target port='0'/>
	I0315 23:12:18.669321   92071 main.go:141] libmachine: (ha-285481-m03)     </serial>
	I0315 23:12:18.669328   92071 main.go:141] libmachine: (ha-285481-m03)     <console type='pty'>
	I0315 23:12:18.669334   92071 main.go:141] libmachine: (ha-285481-m03)       <target type='serial' port='0'/>
	I0315 23:12:18.669340   92071 main.go:141] libmachine: (ha-285481-m03)     </console>
	I0315 23:12:18.669348   92071 main.go:141] libmachine: (ha-285481-m03)     <rng model='virtio'>
	I0315 23:12:18.669359   92071 main.go:141] libmachine: (ha-285481-m03)       <backend model='random'>/dev/random</backend>
	I0315 23:12:18.669367   92071 main.go:141] libmachine: (ha-285481-m03)     </rng>
	I0315 23:12:18.669381   92071 main.go:141] libmachine: (ha-285481-m03)     
	I0315 23:12:18.669404   92071 main.go:141] libmachine: (ha-285481-m03)     
	I0315 23:12:18.669422   92071 main.go:141] libmachine: (ha-285481-m03)   </devices>
	I0315 23:12:18.669430   92071 main.go:141] libmachine: (ha-285481-m03) </domain>
	I0315 23:12:18.669436   92071 main.go:141] libmachine: (ha-285481-m03) 
	I0315 23:12:18.676587   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:c7:8a:c5 in network default
	I0315 23:12:18.677150   92071 main.go:141] libmachine: (ha-285481-m03) Ensuring networks are active...
	I0315 23:12:18.677176   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:18.677861   92071 main.go:141] libmachine: (ha-285481-m03) Ensuring network default is active
	I0315 23:12:18.678097   92071 main.go:141] libmachine: (ha-285481-m03) Ensuring network mk-ha-285481 is active
	I0315 23:12:18.678390   92071 main.go:141] libmachine: (ha-285481-m03) Getting domain xml...
	I0315 23:12:18.679036   92071 main.go:141] libmachine: (ha-285481-m03) Creating domain...
	I0315 23:12:19.886501   92071 main.go:141] libmachine: (ha-285481-m03) Waiting to get IP...
	I0315 23:12:19.887495   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:19.887955   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:19.887982   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:19.887945   92758 retry.go:31] will retry after 294.942371ms: waiting for machine to come up
	I0315 23:12:20.184463   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:20.184973   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:20.185007   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:20.184934   92758 retry.go:31] will retry after 259.466564ms: waiting for machine to come up
	I0315 23:12:20.446542   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:20.447077   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:20.447104   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:20.446959   92758 retry.go:31] will retry after 423.883268ms: waiting for machine to come up
	I0315 23:12:20.872523   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:20.873052   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:20.873088   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:20.872999   92758 retry.go:31] will retry after 457.642128ms: waiting for machine to come up
	I0315 23:12:21.332692   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:21.333166   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:21.333200   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:21.333122   92758 retry.go:31] will retry after 759.65704ms: waiting for machine to come up
	I0315 23:12:22.094047   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:22.094587   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:22.094619   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:22.094522   92758 retry.go:31] will retry after 574.549303ms: waiting for machine to come up
	I0315 23:12:22.670205   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:22.670568   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:22.670594   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:22.670542   92758 retry.go:31] will retry after 797.984979ms: waiting for machine to come up
	I0315 23:12:23.469946   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:23.470310   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:23.470337   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:23.470277   92758 retry.go:31] will retry after 914.454189ms: waiting for machine to come up
	I0315 23:12:24.386053   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:24.386565   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:24.386598   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:24.386509   92758 retry.go:31] will retry after 1.507342364s: waiting for machine to come up
	I0315 23:12:25.896079   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:25.896558   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:25.896580   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:25.896506   92758 retry.go:31] will retry after 1.601064693s: waiting for machine to come up
	I0315 23:12:27.500415   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:27.500952   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:27.500983   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:27.500886   92758 retry.go:31] will retry after 1.881993459s: waiting for machine to come up
	I0315 23:12:29.384401   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:29.384831   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:29.384858   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:29.384769   92758 retry.go:31] will retry after 3.438780484s: waiting for machine to come up
	I0315 23:12:32.826689   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:32.827175   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:32.827205   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:32.827134   92758 retry.go:31] will retry after 3.812719047s: waiting for machine to come up
	I0315 23:12:36.644227   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:36.644595   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find current IP address of domain ha-285481-m03 in network mk-ha-285481
	I0315 23:12:36.644610   92071 main.go:141] libmachine: (ha-285481-m03) DBG | I0315 23:12:36.644571   92758 retry.go:31] will retry after 5.124301462s: waiting for machine to come up
	I0315 23:12:41.772352   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.772777   92071 main.go:141] libmachine: (ha-285481-m03) Found IP for machine: 192.168.39.248
	I0315 23:12:41.772798   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has current primary IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.772803   92071 main.go:141] libmachine: (ha-285481-m03) Reserving static IP address...
	I0315 23:12:41.773209   92071 main.go:141] libmachine: (ha-285481-m03) DBG | unable to find host DHCP lease matching {name: "ha-285481-m03", mac: "52:54:00:2c:2e:06", ip: "192.168.39.248"} in network mk-ha-285481
	I0315 23:12:41.847840   92071 main.go:141] libmachine: (ha-285481-m03) Reserved static IP address: 192.168.39.248
	I0315 23:12:41.847869   92071 main.go:141] libmachine: (ha-285481-m03) Waiting for SSH to be available...
	I0315 23:12:41.847879   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Getting to WaitForSSH function...
	I0315 23:12:41.850500   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.850948   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:minikube Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:41.850984   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.851202   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Using SSH client type: external
	I0315 23:12:41.851239   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa (-rw-------)
	I0315 23:12:41.851269   92071 main.go:141] libmachine: (ha-285481-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.248 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0315 23:12:41.851288   92071 main.go:141] libmachine: (ha-285481-m03) DBG | About to run SSH command:
	I0315 23:12:41.851300   92071 main.go:141] libmachine: (ha-285481-m03) DBG | exit 0
	I0315 23:12:41.975575   92071 main.go:141] libmachine: (ha-285481-m03) DBG | SSH cmd err, output: <nil>: 
	I0315 23:12:41.975889   92071 main.go:141] libmachine: (ha-285481-m03) KVM machine creation complete!
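	[editor's note] The "will retry after ..." lines above show the driver polling for the new domain's DHCP-assigned IP with an increasing delay. A small sketch of that wait pattern, under the assumption that lookupIP is a placeholder for the libvirt lease query (this is not the driver's actual code):

	package example

	import (
		"errors"
		"time"
	)

	// waitForIP polls lookupIP with a growing delay until it succeeds or the
	// timeout expires, mirroring the retry cadence visible in the log above.
	func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookupIP(); err == nil {
				return ip, nil
			}
			time.Sleep(delay)
			if delay < 5*time.Second {
				delay += delay / 2 // grow the interval, roughly like the log's backoff
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}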
	I0315 23:12:41.976173   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetConfigRaw
	I0315 23:12:41.976905   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:41.977130   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:41.977312   92071 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0315 23:12:41.977329   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:12:41.978677   92071 main.go:141] libmachine: Detecting operating system of created instance...
	I0315 23:12:41.978691   92071 main.go:141] libmachine: Waiting for SSH to be available...
	I0315 23:12:41.978697   92071 main.go:141] libmachine: Getting to WaitForSSH function...
	I0315 23:12:41.978706   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:41.980947   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.981287   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:41.981309   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:41.981416   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:41.981595   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:41.981752   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:41.981890   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:41.982082   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:41.982315   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:41.982326   92071 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0315 23:12:42.086831   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:12:42.086873   92071 main.go:141] libmachine: Detecting the provisioner...
	I0315 23:12:42.086885   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.089594   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.090029   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.090061   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.090193   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.090394   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.090536   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.090729   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.090934   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.091132   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.091145   92071 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0315 23:12:42.204339   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0315 23:12:42.204411   92071 main.go:141] libmachine: found compatible host: buildroot
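Provisioner detection above amounts to `cat /etc/os-release` plus a match on the ID/PRETTY_NAME fields. A small self-contained sketch of that parsing; `parseOSRelease` is an illustrative helper, not minikube's own code.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// parseOSRelease reads KEY=VALUE pairs from an os-release style file,
// stripping surrounding quotes, which is enough to detect ID=buildroot as above.
func parseOSRelease(path string) (map[string]string, error) {
	f, err := os.Open(path)
	if err != nil {
		return nil, err
	}
	defer f.Close()
	fields := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		k, v, ok := strings.Cut(line, "=")
		if !ok {
			continue
		}
		fields[k] = strings.Trim(v, `"`)
	}
	return fields, sc.Err()
}

func main() {
	info, err := parseOSRelease("/etc/os-release")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("ID =", info["ID"]) // e.g. "buildroot" on the minikube guest
}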
	I0315 23:12:42.204426   92071 main.go:141] libmachine: Provisioning with buildroot...
	I0315 23:12:42.204453   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:42.204724   92071 buildroot.go:166] provisioning hostname "ha-285481-m03"
	I0315 23:12:42.204757   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:42.204958   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.207496   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.207839   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.207872   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.208028   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.208207   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.208341   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.208458   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.208649   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.208853   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.208872   92071 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481-m03 && echo "ha-285481-m03" | sudo tee /etc/hostname
	I0315 23:12:42.332039   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481-m03
	
	I0315 23:12:42.332063   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.335108   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.335524   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.335548   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.335719   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.335919   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.336109   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.336245   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.336434   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.336650   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.336670   92071 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:12:42.453567   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:12:42.453608   92071 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:12:42.453632   92071 buildroot.go:174] setting up certificates
	I0315 23:12:42.453688   92071 provision.go:84] configureAuth start
	I0315 23:12:42.453703   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetMachineName
	I0315 23:12:42.453989   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:42.456785   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.457200   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.457237   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.457343   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.459502   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.459827   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.459850   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.460039   92071 provision.go:143] copyHostCerts
	I0315 23:12:42.460072   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:12:42.460116   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:12:42.460129   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:12:42.460223   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:12:42.460359   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:12:42.460386   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:12:42.460396   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:12:42.460441   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:12:42.460515   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:12:42.460538   92071 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:12:42.460548   92071 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:12:42.460583   92071 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:12:42.460664   92071 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481-m03 san=[127.0.0.1 192.168.39.248 ha-285481-m03 localhost minikube]
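The "generating server cert" step issues a machine certificate whose SANs are the list shown above (loopback, the guest IP, the hostname, localhost, minikube). A rough Go sketch of issuing such a cert with crypto/x509; the CA in main is a throwaway self-signed one created only so the example runs end to end, and `signServerCert` plus the subject names are placeholders.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

// signServerCert issues a server certificate carrying the given IP and DNS SANs, signed by the supplied CA.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"example"}, CommonName: "minikube-server"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), key, nil
}

func main() {
	// Throwaway self-signed CA so the sketch runs; the real step reuses the existing ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.248")}
	dnsNames := []string{"ha-285481-m03", "localhost", "minikube"}
	certPEM, _, err := signServerCert(caCert, caKey, ips, dnsNames)
	if err != nil {
		log.Fatal(err)
	}
	_ = os.WriteFile("server.pem", certPEM, 0o600)
}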
	I0315 23:12:42.577057   92071 provision.go:177] copyRemoteCerts
	I0315 23:12:42.577137   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:12:42.577163   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.579866   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.580226   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.580258   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.580500   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.580737   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.580912   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.581055   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:42.670738   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:12:42.670838   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:12:42.700340   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:12:42.700452   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0315 23:12:42.727635   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:12:42.727733   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:12:42.755424   92071 provision.go:87] duration metric: took 301.717801ms to configureAuth
	I0315 23:12:42.755460   92071 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:12:42.755758   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:42.755860   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:42.758358   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.758739   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:42.758768   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:42.758970   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:42.759174   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.759359   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:42.759552   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:42.759722   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:42.759892   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:42.759907   92071 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:12:43.060428   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:12:43.060465   92071 main.go:141] libmachine: Checking connection to Docker...
	I0315 23:12:43.060477   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetURL
	I0315 23:12:43.061826   92071 main.go:141] libmachine: (ha-285481-m03) DBG | Using libvirt version 6000000
	I0315 23:12:43.064630   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.065109   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.065143   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.065324   92071 main.go:141] libmachine: Docker is up and running!
	I0315 23:12:43.065347   92071 main.go:141] libmachine: Reticulating splines...
	I0315 23:12:43.065356   92071 client.go:171] duration metric: took 24.866995333s to LocalClient.Create
	I0315 23:12:43.065385   92071 start.go:167] duration metric: took 24.867062069s to libmachine.API.Create "ha-285481"
	I0315 23:12:43.065397   92071 start.go:293] postStartSetup for "ha-285481-m03" (driver="kvm2")
	I0315 23:12:43.065410   92071 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:12:43.065432   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.065692   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:12:43.065726   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:43.067982   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.068366   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.068397   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.068508   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.068707   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.068884   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.069026   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:43.158841   92071 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:12:43.163346   92071 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:12:43.163375   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:12:43.163438   92071 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:12:43.163505   92071 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:12:43.163517   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:12:43.163599   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:12:43.174569   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:12:43.201998   92071 start.go:296] duration metric: took 136.583437ms for postStartSetup
	I0315 23:12:43.202064   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetConfigRaw
	I0315 23:12:43.202632   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:43.206085   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.206573   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.206606   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.206912   92071 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:12:43.207142   92071 start.go:128] duration metric: took 25.027219533s to createHost
	I0315 23:12:43.207171   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:43.209281   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.209601   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.209632   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.209789   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.209987   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.210182   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.210342   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.210489   92071 main.go:141] libmachine: Using SSH client type: native
	I0315 23:12:43.210684   92071 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.248 22 <nil> <nil>}
	I0315 23:12:43.210700   92071 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:12:43.324346   92071 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544363.307920175
	
	I0315 23:12:43.324376   92071 fix.go:216] guest clock: 1710544363.307920175
	I0315 23:12:43.324387   92071 fix.go:229] Guest: 2024-03-15 23:12:43.307920175 +0000 UTC Remote: 2024-03-15 23:12:43.207158104 +0000 UTC m=+167.425486209 (delta=100.762071ms)
	I0315 23:12:43.324408   92071 fix.go:200] guest clock delta is within tolerance: 100.762071ms
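The guest clock check runs `date +%s.%N` on the new machine (rendered above as `%!s(MISSING).%!N(MISSING)` by the logger's format escaping) and compares it with the host clock. A small sketch of that comparison, reusing the two timestamps from the log lines above; the 2s tolerance is a placeholder, not necessarily the value minikube uses.

package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns `date +%s.%N` output into a time.Time without losing nanoseconds.
func parseGuestClock(out string) (time.Time, error) {
	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(out), ".")
	sec, err := strconv.ParseInt(secStr, 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if nsecStr != "" {
		for len(nsecStr) < 9 { // pad the fractional part out to nanoseconds
			nsecStr += "0"
		}
		if nsec, err = strconv.ParseInt(nsecStr[:9], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710544363.307920175") // guest clock value from the log above
	if err != nil {
		log.Fatal(err)
	}
	local := time.Date(2024, 3, 15, 23, 12, 43, 207158104, time.UTC) // "Remote:" timestamp from the log
	delta := local.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, delta <= 2*time.Second) // prints delta=100.762071ms
}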
	I0315 23:12:43.324415   92071 start.go:83] releasing machines lock for "ha-285481-m03", held for 25.144613516s
	I0315 23:12:43.324441   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.324747   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:43.327799   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.328213   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.328238   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.331426   92071 out.go:177] * Found network options:
	I0315 23:12:43.333077   92071 out.go:177]   - NO_PROXY=192.168.39.23,192.168.39.201
	W0315 23:12:43.334556   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 23:12:43.334575   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:12:43.334592   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.335192   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.335431   92071 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:12:43.335551   92071 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:12:43.335592   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	W0315 23:12:43.335622   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	W0315 23:12:43.335646   92071 proxy.go:119] fail to check proxy env: Error ip not in block
	I0315 23:12:43.335724   92071 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:12:43.335751   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:12:43.338519   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.338731   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.338948   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.338990   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.339179   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:43.339212   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:43.339251   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.339452   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:12:43.339491   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.339665   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:12:43.339686   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.339791   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:12:43.339960   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:43.339971   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:12:43.577337   92071 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:12:43.584594   92071 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:12:43.584660   92071 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:12:43.603148   92071 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0315 23:12:43.603176   92071 start.go:494] detecting cgroup driver to use...
	I0315 23:12:43.603254   92071 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:12:43.620843   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:12:43.635416   92071 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:12:43.635492   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:12:43.650382   92071 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:12:43.664227   92071 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:12:43.795432   92071 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:12:43.967211   92071 docker.go:233] disabling docker service ...
	I0315 23:12:43.967298   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:12:43.984855   92071 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:12:43.999121   92071 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:12:44.127393   92071 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:12:44.257425   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:12:44.273058   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:12:44.292407   92071 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:12:44.292480   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.303136   92071 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:12:44.303205   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.314595   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.326671   92071 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:12:44.338689   92071 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:12:44.350004   92071 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:12:44.360073   92071 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0315 23:12:44.360137   92071 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0315 23:12:44.374552   92071 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:12:44.386155   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:12:44.527162   92071 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:12:44.675315   92071 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:12:44.675420   92071 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
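"Will wait 60s for socket path" is a poll on the CRI-O socket until it exists. A trivial local sketch of the same pattern; the real check stats the path over SSH, and the helper name and poll interval here are made up.

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the deadline passes.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %v", path, timeout)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}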
	I0315 23:12:44.680423   92071 start.go:562] Will wait 60s for crictl version
	I0315 23:12:44.680486   92071 ssh_runner.go:195] Run: which crictl
	I0315 23:12:44.684546   92071 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:12:44.722943   92071 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:12:44.723021   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:12:44.755906   92071 ssh_runner.go:195] Run: crio --version
	I0315 23:12:44.792822   92071 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:12:44.794529   92071 out.go:177]   - env NO_PROXY=192.168.39.23
	I0315 23:12:44.796178   92071 out.go:177]   - env NO_PROXY=192.168.39.23,192.168.39.201
	I0315 23:12:44.797748   92071 main.go:141] libmachine: (ha-285481-m03) Calling .GetIP
	I0315 23:12:44.800798   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:44.801303   92071 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:12:44.801335   92071 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:12:44.801533   92071 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:12:44.806349   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
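The bash one-liner above makes the hosts entry idempotent: strip any old line for the name, then append "IP<tab>name". The same idea in Go, run against a local copy of the file so the sketch needs no root; `ensureHostsEntry` and the file name are illustrative only.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites a hosts-style file so it contains exactly one line for name, pointing at ip.
func ensureHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop any stale entry for this name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// Operates on a local copy so the sketch does not need root.
	if err := ensureHostsEntry("hosts.copy", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}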
	I0315 23:12:44.821209   92071 mustload.go:65] Loading cluster: ha-285481
	I0315 23:12:44.821473   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:12:44.821790   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:44.821857   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:44.836729   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35509
	I0315 23:12:44.837210   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:44.837715   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:44.837737   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:44.838055   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:44.838223   92071 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:12:44.839734   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:12:44.840018   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:44.840056   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:44.854233   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38299
	I0315 23:12:44.854722   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:44.855190   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:44.855237   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:44.855612   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:44.855825   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:12:44.856014   92071 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.248
	I0315 23:12:44.856026   92071 certs.go:194] generating shared ca certs ...
	I0315 23:12:44.856045   92071 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:12:44.856177   92071 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:12:44.856221   92071 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:12:44.856237   92071 certs.go:256] generating profile certs ...
	I0315 23:12:44.856303   92071 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:12:44.856327   92071 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee
	I0315 23:12:44.856341   92071 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.201 192.168.39.248 192.168.39.254]
	I0315 23:12:45.085122   92071 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee ...
	I0315 23:12:45.085159   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee: {Name:mk207d01a1ed1f040cd6a8eb5e410f01a685be92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:12:45.085339   92071 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee ...
	I0315 23:12:45.085352   92071 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee: {Name:mkd9e113f45cebab606d7ca0da3b1251ca4d3330 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:12:45.085431   92071 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.267b8bee -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:12:45.085559   92071 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.267b8bee -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
	I0315 23:12:45.085708   92071 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:12:45.085728   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:12:45.085745   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:12:45.085765   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:12:45.085782   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:12:45.085796   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:12:45.085812   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:12:45.085824   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:12:45.085839   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:12:45.085901   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:12:45.085942   92071 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:12:45.085957   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:12:45.085987   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:12:45.086012   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:12:45.086044   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:12:45.086099   92071 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:12:45.086137   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.086164   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.086183   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.086222   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:12:45.089898   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:45.090358   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:12:45.090379   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:45.090583   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:12:45.090785   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:12:45.091000   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:12:45.091172   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:12:45.163734   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0315 23:12:45.169808   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0315 23:12:45.183848   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0315 23:12:45.188463   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0315 23:12:45.200689   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0315 23:12:45.205131   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0315 23:12:45.216757   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0315 23:12:45.221251   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0315 23:12:45.233959   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0315 23:12:45.238897   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0315 23:12:45.251083   92071 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0315 23:12:45.255533   92071 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0315 23:12:45.266981   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:12:45.294449   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:12:45.320923   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:12:45.346156   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:12:45.371713   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0315 23:12:45.398008   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0315 23:12:45.426230   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:12:45.452890   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:12:45.480419   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:12:45.505969   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:12:45.532200   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:12:45.559949   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0315 23:12:45.577560   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0315 23:12:45.596317   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0315 23:12:45.615105   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0315 23:12:45.633463   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0315 23:12:45.651378   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0315 23:12:45.669053   92071 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0315 23:12:45.687106   92071 ssh_runner.go:195] Run: openssl version
	I0315 23:12:45.693315   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:12:45.705279   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.710090   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.710152   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:12:45.716237   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:12:45.728450   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:12:45.740118   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.745248   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.745304   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:12:45.751304   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:12:45.762370   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:12:45.776861   92071 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.782531   92071 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.782602   92071 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:12:45.788994   92071 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:12:45.800920   92071 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:12:45.805284   92071 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0315 23:12:45.805343   92071 kubeadm.go:928] updating node {m03 192.168.39.248 8443 v1.28.4 crio true true} ...
	I0315 23:12:45.805445   92071 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.248
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:12:45.805471   92071 kube-vip.go:111] generating kube-vip config ...
	I0315 23:12:45.805509   92071 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:12:45.824239   92071 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:12:45.824321   92071 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
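The block above is a static-pod manifest for kube-vip, which advertises the control-plane VIP 192.168.39.254 on port 8443 and is later copied to /etc/kubernetes/manifests. If you want to sanity-check such a manifest outside the cluster, a short sketch that parses it as a core/v1 Pod; the file name is whatever you saved the YAML to, and this check is not part of the test flow itself.

package main

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	// kube-vip.yaml is a local copy of the manifest printed above.
	data, err := os.ReadFile("kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	var pod corev1.Pod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		fmt.Fprintln(os.Stderr, "manifest does not parse as a Pod:", err)
		os.Exit(1)
	}
	if len(pod.Spec.Containers) > 0 {
		fmt.Printf("pod=%s image=%s hostNetwork=%v\n", pod.Name, pod.Spec.Containers[0].Image, pod.Spec.HostNetwork)
	}
}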
	I0315 23:12:45.824387   92071 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:12:45.835730   92071 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0315 23:12:45.835805   92071 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0315 23:12:45.846693   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0315 23:12:45.846730   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0315 23:12:45.846745   92071 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
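Each binary URL above is paired with a `.sha256` checksum file at the same path. A sketch of downloading one of them and verifying it against that checksum; `fetchWithSHA256` is a made-up helper name and error handling is minimal.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetchWithSHA256 downloads url to dst and verifies it against the digest published at url+".sha256".
func fetchWithSHA256(url, dst string) error {
	sumResp, err := http.Get(url + ".sha256")
	if err != nil {
		return err
	}
	defer sumResp.Body.Close()
	sumBytes, err := io.ReadAll(sumResp.Body)
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumBytes))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]

	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
	if err := fetchWithSHA256(url, "kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}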
	I0315 23:12:45.846753   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:12:45.846754   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:12:45.846760   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:12:45.846821   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0315 23:12:45.846829   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0315 23:12:45.866657   92071 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:12:45.866728   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0315 23:12:45.866758   92071 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0315 23:12:45.866762   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0315 23:12:45.866760   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0315 23:12:45.866788   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0315 23:12:45.902504   92071 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0315 23:12:45.902549   92071 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
	I0315 23:12:46.883210   92071 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0315 23:12:46.893254   92071 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0315 23:12:46.911175   92071 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:12:46.929189   92071 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:12:46.946418   92071 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:12:46.950656   92071 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0315 23:12:46.964628   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:12:47.081272   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:12:47.101634   92071 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:12:47.102110   92071 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:12:47.102165   92071 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:12:47.117619   92071 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40953
	I0315 23:12:47.118076   92071 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:12:47.118683   92071 main.go:141] libmachine: Using API Version  1
	I0315 23:12:47.118716   92071 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:12:47.119093   92071 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:12:47.119348   92071 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:12:47.119517   92071 start.go:316] joinCluster: &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:12:47.119690   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0315 23:12:47.119715   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:12:47.123547   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:47.124035   92071 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:12:47.124062   92071 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:12:47.124249   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:12:47.124454   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:12:47.124654   92071 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:12:47.124849   92071 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:12:47.298363   92071 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:12:47.298421   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token om5jb8.iwf3rk95i3babp1m --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m03 --control-plane --apiserver-advertise-address=192.168.39.248 --apiserver-bind-port=8443"
	I0315 23:13:13.853139   92071 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token om5jb8.iwf3rk95i3babp1m --discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-285481-m03 --control-plane --apiserver-advertise-address=192.168.39.248 --apiserver-bind-port=8443": (26.554684226s)
	I0315 23:13:13.853180   92071 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0315 23:13:14.399489   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-285481-m03 minikube.k8s.io/updated_at=2024_03_15T23_13_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=ha-285481 minikube.k8s.io/primary=false
	I0315 23:13:14.530391   92071 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-285481-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0315 23:13:14.693280   92071 start.go:318] duration metric: took 27.573759647s to joinCluster
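
	The join above follows the standard kubeadm flow: generate a join command on an existing control plane with a non-expiring token, then run it on the new node with --control-plane and node-specific flags appended. A hedged sketch of the first half with os/exec is shown below; it assumes kubeadm is on PATH, and the extra flags simply mirror the ones visible in the log rather than any fixed minikube API.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Ask kubeadm on an existing control plane for a join command with a non-expiring token.
		out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			panic(err)
		}
		joinCmd := strings.TrimSpace(string(out))

		// The log then runs this command on the new node with control-plane flags appended.
		extra := []string{
			"--control-plane",
			"--apiserver-advertise-address=192.168.39.248",
			"--apiserver-bind-port=8443",
			"--node-name=ha-285481-m03",
		}
		fmt.Println("run on the joining node:")
		fmt.Println(joinCmd, strings.Join(extra, " "))
	}
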
	I0315 23:13:14.693371   92071 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:13:14.694896   92071 out.go:177] * Verifying Kubernetes components...
	I0315 23:13:14.693881   92071 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:13:14.696469   92071 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:13:14.884831   92071 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:13:14.904693   92071 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:13:14.904986   92071 kapi.go:59] client config for ha-285481: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.crt", KeyFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key", CAFile:"/home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c57a40), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0315 23:13:14.905060   92071 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.23:8443
	I0315 23:13:14.905270   92071 node_ready.go:35] waiting up to 6m0s for node "ha-285481-m03" to be "Ready" ...
	I0315 23:13:14.905367   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:14.905374   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:14.905382   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:14.905387   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:14.909792   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:15.406319   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:15.406340   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:15.406347   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:15.406351   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:15.416402   92071 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 23:13:15.906127   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:15.906172   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:15.906187   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:15.906192   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:15.916700   92071 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0315 23:13:16.406323   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:16.406343   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:16.406351   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:16.406356   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:16.410832   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:16.905550   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:16.905573   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:16.905581   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:16.905586   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:16.912053   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:16.912931   92071 node_ready.go:53] node "ha-285481-m03" has status "Ready":"False"
	I0315 23:13:17.406253   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:17.406280   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:17.406291   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:17.406299   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:17.412392   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:17.905680   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:17.905708   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:17.905720   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:17.905726   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:17.909546   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:18.406047   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:18.406068   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:18.406076   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:18.406081   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:18.409875   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:18.905910   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:18.905938   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:18.905952   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:18.905957   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:18.909851   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:19.405728   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:19.405751   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:19.405759   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:19.405764   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:19.410731   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:19.411392   92071 node_ready.go:53] node "ha-285481-m03" has status "Ready":"False"
	I0315 23:13:19.905626   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:19.905652   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:19.905660   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:19.905665   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:19.910162   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:20.405798   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:20.405825   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:20.405848   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:20.405854   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:20.409891   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:20.906021   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:20.906051   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:20.906060   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:20.906064   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:20.912040   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:21.405744   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:21.405770   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.405781   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.405786   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.410282   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:21.410912   92071 node_ready.go:49] node "ha-285481-m03" has status "Ready":"True"
	I0315 23:13:21.410928   92071 node_ready.go:38] duration metric: took 6.505644255s for node "ha-285481-m03" to be "Ready" ...
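
	The GET loop above is minikube polling /api/v1/nodes/ha-285481-m03 roughly twice a second until the node reports the Ready condition. Below is a minimal client-go equivalent of that wait, assuming a kubeconfig at the default location; it is an illustrative sketch, not minikube's node_ready.go.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady reports whether the node's Ready condition is True.
	func nodeReady(n *corev1.Node) bool {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-285481-m03", metav1.GetOptions{})
			if err == nil && nodeReady(node) {
				fmt.Println("node is Ready")
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for node")
	}
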
	I0315 23:13:21.410937   92071 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 23:13:21.410997   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:21.411006   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.411013   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.411018   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.425810   92071 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0315 23:13:21.433857   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.433944   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-9c44k
	I0315 23:13:21.433952   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.433960   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.433966   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.437935   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.438523   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:21.438538   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.438545   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.438549   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.442057   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.442526   92071 pod_ready.go:92] pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.442544   92071 pod_ready.go:81] duration metric: took 8.662575ms for pod "coredns-5dd5756b68-9c44k" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.442553   92071 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.442615   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-qxtp4
	I0315 23:13:21.442623   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.442629   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.442633   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.446399   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.447284   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:21.447308   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.447336   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.447346   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.450462   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.451291   92071 pod_ready.go:92] pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.451314   92071 pod_ready.go:81] duration metric: took 8.75368ms for pod "coredns-5dd5756b68-qxtp4" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.451350   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.451430   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481
	I0315 23:13:21.451439   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.451446   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.451449   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.454754   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.455344   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:21.455362   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.455373   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.455379   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.459946   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:21.460422   92071 pod_ready.go:92] pod "etcd-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.460444   92071 pod_ready.go:81] duration metric: took 9.081809ms for pod "etcd-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.460457   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.460534   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m02
	I0315 23:13:21.460545   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.460555   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.460562   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.464643   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:21.465631   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:21.465649   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.465659   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.465664   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.468753   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.469214   92071 pod_ready.go:92] pod "etcd-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:21.469230   92071 pod_ready.go:81] duration metric: took 8.765821ms for pod "etcd-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.469239   92071 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:21.606680   92071 request.go:629] Waited for 137.362155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:21.606755   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:21.606763   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.606771   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.606777   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.610339   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:21.806487   92071 request.go:629] Waited for 195.392779ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:21.806569   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:21.806578   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:21.806585   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:21.806589   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:21.810198   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:22.006461   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:22.006488   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.006499   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.006507   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.012164   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:22.206607   92071 request.go:629] Waited for 192.345896ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.206666   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.206671   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.206679   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.206688   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.210395   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:22.469813   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:22.469845   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.469857   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.469862   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.481010   92071 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0315 23:13:22.605900   92071 request.go:629] Waited for 124.238103ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.605977   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:22.605985   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.605995   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.606002   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.609903   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:22.970138   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:22.970163   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:22.970174   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:22.970179   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:22.974094   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.006024   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:23.006047   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.006056   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.006062   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.009489   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.470182   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:23.470205   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.470212   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.470216   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.474202   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.475138   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:23.475155   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.475162   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.475166   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.478201   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:23.478895   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:23.970179   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:23.970202   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.970210   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.970213   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.974695   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:23.975281   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:23.975296   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:23.975304   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:23.975308   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:23.978264   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:24.469945   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:24.469975   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.469987   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.469993   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.475144   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:24.475859   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:24.475877   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.475887   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.475890   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.479859   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:24.970199   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:24.970224   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.970233   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.970238   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.973947   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:24.974851   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:24.974866   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:24.974873   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:24.974876   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:24.978718   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:25.469505   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:25.469530   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.469540   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.469546   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.474138   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:25.474744   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:25.474762   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.474773   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.474780   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.490305   92071 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0315 23:13:25.491799   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:25.969880   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:25.969904   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.969913   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.969916   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.973908   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:25.974609   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:25.974624   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:25.974634   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:25.974641   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:25.977966   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:26.469864   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:26.469889   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.469898   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.469903   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.475271   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:26.475978   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:26.475999   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.476010   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.476019   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.479931   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:26.969877   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:26.969902   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.969911   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.969915   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.973743   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:26.974506   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:26.974517   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:26.974525   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:26.974529   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:26.977528   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:27.469628   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:27.469659   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.469670   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.469677   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.474113   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:27.474830   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:27.474847   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.474854   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.474858   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.477892   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:27.970275   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:27.970298   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.970305   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.970308   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.974859   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:27.975700   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:27.975718   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:27.975725   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:27.975730   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:27.979337   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:27.979923   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:28.469657   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:28.469684   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.469694   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.469701   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.473613   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:28.474430   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:28.474454   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.474464   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.474471   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.477714   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:28.969713   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:28.969734   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.969743   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.969746   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.976616   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:28.977487   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:28.977505   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:28.977511   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:28.977516   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:28.981200   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:29.469819   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:29.469847   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.469860   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.469866   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.474255   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:29.475160   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:29.475174   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.475185   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.475192   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.478495   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:29.970193   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:29.970220   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.970231   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.970236   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.974520   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:29.975072   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:29.975087   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:29.975097   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:29.975104   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:29.978480   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:30.470360   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:30.470382   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.470391   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.470396   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.474753   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:30.475643   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:30.475660   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.475671   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.475677   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.478940   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:30.479779   92071 pod_ready.go:102] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"False"
	I0315 23:13:30.969664   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:30.969687   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.969695   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.969701   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.973628   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:30.974296   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:30.974312   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:30.974319   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:30.974324   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:30.977371   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.470449   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/etcd-ha-285481-m03
	I0315 23:13:31.470470   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.470479   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.470483   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.474456   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.475443   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:31.475462   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.475471   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.475476   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.478883   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.479473   92071 pod_ready.go:92] pod "etcd-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.479494   92071 pod_ready.go:81] duration metric: took 10.010248114s for pod "etcd-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
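
	The per-pod waits above and below all hinge on the same test: a system pod counts as ready once its PodReady condition is True. A short client-go sketch of that condition check for the etcd pod named in the log is shown below; it is illustrative only, not minikube's pod_ready.go.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-285481-m03", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", podReady(pod))
	}
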
	I0315 23:13:31.479512   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.479580   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481
	I0315 23:13:31.479590   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.479597   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.479601   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.482501   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:31.483380   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:31.483395   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.483405   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.483410   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.487247   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.487889   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.487909   92071 pod_ready.go:81] duration metric: took 8.390404ms for pod "kube-apiserver-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.487918   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.487970   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m02
	I0315 23:13:31.487978   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.487985   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.487990   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.490837   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:31.491425   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:31.491438   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.491448   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.491458   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.494944   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.495674   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.495697   92071 pod_ready.go:81] duration metric: took 7.770928ms for pod "kube-apiserver-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.495709   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.495763   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-285481-m03
	I0315 23:13:31.495774   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.495784   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.495790   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.499723   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.500755   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:31.500768   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.500775   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.500779   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.503461   92071 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0315 23:13:31.503843   92071 pod_ready.go:92] pod "kube-apiserver-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.503861   92071 pod_ready.go:81] duration metric: took 8.14488ms for pod "kube-apiserver-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.503869   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.503940   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481
	I0315 23:13:31.503953   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.503963   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.503973   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.508754   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:31.509610   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:31.509623   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.509629   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.509632   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.513069   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.513615   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.513634   92071 pod_ready.go:81] duration metric: took 9.75855ms for pod "kube-controller-manager-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.513643   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.671055   92071 request.go:629] Waited for 157.312221ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:13:31.671128   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m02
	I0315 23:13:31.671136   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.671146   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.671159   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.675746   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:31.870935   92071 request.go:629] Waited for 194.363099ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:31.871014   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:31.871021   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:31.871029   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:31.871037   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:31.874726   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:31.875426   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:31.875444   92071 pod_ready.go:81] duration metric: took 361.795409ms for pod "kube-controller-manager-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:31.875455   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.070769   92071 request.go:629] Waited for 195.244188ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m03
	I0315 23:13:32.070861   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-285481-m03
	I0315 23:13:32.070873   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.070886   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.070897   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.074571   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:32.270812   92071 request.go:629] Waited for 195.382785ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:32.270876   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:32.270881   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.270890   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.270897   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.276048   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:32.276742   92071 pod_ready.go:92] pod "kube-controller-manager-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:32.276760   92071 pod_ready.go:81] duration metric: took 401.298691ms for pod "kube-controller-manager-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.276770   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.470907   92071 request.go:629] Waited for 194.045862ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:13:32.470965   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2hcgt
	I0315 23:13:32.470971   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.470978   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.470983   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.474865   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:32.671180   92071 request.go:629] Waited for 195.397732ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:32.671277   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:32.671284   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.671291   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.671295   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.675279   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:32.676042   92071 pod_ready.go:92] pod "kube-proxy-2hcgt" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:32.676066   92071 pod_ready.go:81] duration metric: took 399.288159ms for pod "kube-proxy-2hcgt" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.676092   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:32.871360   92071 request.go:629] Waited for 195.149955ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:13:32.871443   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-cml9m
	I0315 23:13:32.871458   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:32.871467   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:32.871478   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:32.875046   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.070617   92071 request.go:629] Waited for 194.285892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.070680   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.070687   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.070696   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.070706   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.074257   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.074924   92071 pod_ready.go:92] pod "kube-proxy-cml9m" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:33.074939   92071 pod_ready.go:81] duration metric: took 398.836861ms for pod "kube-proxy-cml9m" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.074950   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-d2fjd" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.270679   92071 request.go:629] Waited for 195.647313ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d2fjd
	I0315 23:13:33.270751   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-proxy-d2fjd
	I0315 23:13:33.270763   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.270770   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.270775   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.275149   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:33.471378   92071 request.go:629] Waited for 195.368272ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:33.471438   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:33.471443   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.471450   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.471455   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.475163   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.475904   92071 pod_ready.go:92] pod "kube-proxy-d2fjd" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:33.475925   92071 pod_ready.go:81] duration metric: took 400.969148ms for pod "kube-proxy-d2fjd" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.475938   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.671012   92071 request.go:629] Waited for 194.984048ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:13:33.671071   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481
	I0315 23:13:33.671081   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.671089   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.671094   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.674657   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.870607   92071 request.go:629] Waited for 195.29247ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.870671   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481
	I0315 23:13:33.870676   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:33.870684   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:33.870691   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:33.874711   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:33.875378   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:33.875397   92071 pod_ready.go:81] duration metric: took 399.450919ms for pod "kube-scheduler-ha-285481" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:33.875408   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.071414   92071 request.go:629] Waited for 195.915601ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:13:34.071495   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m02
	I0315 23:13:34.071501   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.071508   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.071513   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.075507   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:34.270517   92071 request.go:629] Waited for 194.285622ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:34.270580   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m02
	I0315 23:13:34.270585   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.270594   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.270602   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.275216   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:34.277040   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:34.277067   92071 pod_ready.go:81] duration metric: took 401.647615ms for pod "kube-scheduler-ha-285481-m02" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.277081   92071 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.471099   92071 request.go:629] Waited for 193.936989ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m03
	I0315 23:13:34.471201   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-285481-m03
	I0315 23:13:34.471213   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.471224   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.471234   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.474997   92071 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0315 23:13:34.670683   92071 request.go:629] Waited for 194.792633ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:34.670761   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes/ha-285481-m03
	I0315 23:13:34.670766   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.670774   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.670778   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.674900   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:34.675780   92071 pod_ready.go:92] pod "kube-scheduler-ha-285481-m03" in "kube-system" namespace has status "Ready":"True"
	I0315 23:13:34.675801   92071 pod_ready.go:81] duration metric: took 398.711684ms for pod "kube-scheduler-ha-285481-m03" in "kube-system" namespace to be "Ready" ...
	I0315 23:13:34.675816   92071 pod_ready.go:38] duration metric: took 13.264863296s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0315 23:13:34.675836   92071 api_server.go:52] waiting for apiserver process to appear ...
	I0315 23:13:34.675902   92071 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:13:34.693422   92071 api_server.go:72] duration metric: took 20.000003233s to wait for apiserver process to appear ...
	I0315 23:13:34.693457   92071 api_server.go:88] waiting for apiserver healthz status ...
	I0315 23:13:34.693481   92071 api_server.go:253] Checking apiserver healthz at https://192.168.39.23:8443/healthz ...
	I0315 23:13:34.698575   92071 api_server.go:279] https://192.168.39.23:8443/healthz returned 200:
	ok
	I0315 23:13:34.698681   92071 round_trippers.go:463] GET https://192.168.39.23:8443/version
	I0315 23:13:34.698693   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.698703   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.698714   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.699919   92071 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0315 23:13:34.699980   92071 api_server.go:141] control plane version: v1.28.4
	I0315 23:13:34.699995   92071 api_server.go:131] duration metric: took 6.532004ms to wait for apiserver health ...
	I0315 23:13:34.700006   92071 system_pods.go:43] waiting for kube-system pods to appear ...
	I0315 23:13:34.871282   92071 request.go:629] Waited for 171.202632ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:34.871380   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:34.871389   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:34.871397   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:34.871404   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:34.879795   92071 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0315 23:13:34.885854   92071 system_pods.go:59] 24 kube-system pods found
	I0315 23:13:34.885883   92071 system_pods.go:61] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:13:34.885889   92071 system_pods.go:61] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:13:34.885892   92071 system_pods.go:61] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:13:34.885896   92071 system_pods.go:61] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:13:34.885899   92071 system_pods.go:61] "etcd-ha-285481-m03" [675ae74e-7e71-4fee-b8c1-b6b757a95643] Running
	I0315 23:13:34.885902   92071 system_pods.go:61] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:13:34.885904   92071 system_pods.go:61] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:13:34.885907   92071 system_pods.go:61] "kindnet-zptcr" [901a115d-b255-473b-8f60-236d2bead302] Running
	I0315 23:13:34.885911   92071 system_pods.go:61] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:13:34.885914   92071 system_pods.go:61] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:13:34.885917   92071 system_pods.go:61] "kube-apiserver-ha-285481-m03" [1bf2f928-6d7b-4b8a-bcb9-8f0120766edf] Running
	I0315 23:13:34.885920   92071 system_pods.go:61] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:13:34.885927   92071 system_pods.go:61] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:13:34.885930   92071 system_pods.go:61] "kube-controller-manager-ha-285481-m03" [974871d4-bf77-48d6-b5b0-2315381e40f0] Running
	I0315 23:13:34.885933   92071 system_pods.go:61] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:13:34.885935   92071 system_pods.go:61] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:13:34.885940   92071 system_pods.go:61] "kube-proxy-d2fjd" [d2fc9b42-7c35-4472-a8de-4f5dafe9d208] Running
	I0315 23:13:34.885943   92071 system_pods.go:61] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:13:34.885946   92071 system_pods.go:61] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:13:34.885948   92071 system_pods.go:61] "kube-scheduler-ha-285481-m03" [5b522c93-1875-436c-a84f-7a71b1a694f6] Running
	I0315 23:13:34.885951   92071 system_pods.go:61] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:13:34.885954   92071 system_pods.go:61] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:13:34.885957   92071 system_pods.go:61] "kube-vip-ha-285481-m03" [c73a666c-3bb1-4b8c-becc-574021feab19] Running
	I0315 23:13:34.885960   92071 system_pods.go:61] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:13:34.885966   92071 system_pods.go:74] duration metric: took 185.952256ms to wait for pod list to return data ...
	I0315 23:13:34.885979   92071 default_sa.go:34] waiting for default service account to be created ...
	I0315 23:13:35.071406   92071 request.go:629] Waited for 185.343202ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:13:35.071478   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/default/serviceaccounts
	I0315 23:13:35.071485   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:35.071494   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:35.071503   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:35.075603   92071 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0315 23:13:35.075754   92071 default_sa.go:45] found service account: "default"
	I0315 23:13:35.075772   92071 default_sa.go:55] duration metric: took 189.785271ms for default service account to be created ...
	I0315 23:13:35.075799   92071 system_pods.go:116] waiting for k8s-apps to be running ...
	I0315 23:13:35.271363   92071 request.go:629] Waited for 195.456894ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:35.271436   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/namespaces/kube-system/pods
	I0315 23:13:35.271443   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:35.271453   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:35.271470   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:35.278347   92071 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0315 23:13:35.284755   92071 system_pods.go:86] 24 kube-system pods found
	I0315 23:13:35.284786   92071 system_pods.go:89] "coredns-5dd5756b68-9c44k" [52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e] Running
	I0315 23:13:35.284792   92071 system_pods.go:89] "coredns-5dd5756b68-qxtp4" [f713da8e-df53-4299-9b3c-8390bc69a077] Running
	I0315 23:13:35.284796   92071 system_pods.go:89] "etcd-ha-285481" [caac6ddf-80d0-4019-9ecf-f72f94c2aa96] Running
	I0315 23:13:35.284800   92071 system_pods.go:89] "etcd-ha-285481-m02" [32786ec3-85ef-4ce3-af16-48644cf0799d] Running
	I0315 23:13:35.284804   92071 system_pods.go:89] "etcd-ha-285481-m03" [675ae74e-7e71-4fee-b8c1-b6b757a95643] Running
	I0315 23:13:35.284808   92071 system_pods.go:89] "kindnet-9fd6f" [bfce84cd-8517-4081-bd7d-a32f21e4b5ad] Running
	I0315 23:13:35.284812   92071 system_pods.go:89] "kindnet-pnxpk" [7e1f44d6-db0f-4c19-8b34-7f3e53e51886] Running
	I0315 23:13:35.284816   92071 system_pods.go:89] "kindnet-zptcr" [901a115d-b255-473b-8f60-236d2bead302] Running
	I0315 23:13:35.284820   92071 system_pods.go:89] "kube-apiserver-ha-285481" [f4cd4c32-ba4f-421c-8909-0ac03a470a3d] Running
	I0315 23:13:35.284824   92071 system_pods.go:89] "kube-apiserver-ha-285481-m02" [81d652ed-3df4-401c-82d3-f944a67b673e] Running
	I0315 23:13:35.284828   92071 system_pods.go:89] "kube-apiserver-ha-285481-m03" [1bf2f928-6d7b-4b8a-bcb9-8f0120766edf] Running
	I0315 23:13:35.284833   92071 system_pods.go:89] "kube-controller-manager-ha-285481" [e0a59a53-c361-4507-bb3c-32a6227c451f] Running
	I0315 23:13:35.284837   92071 system_pods.go:89] "kube-controller-manager-ha-285481-m02" [e52cac2f-bc75-4d27-a259-ac988c44e363] Running
	I0315 23:13:35.284842   92071 system_pods.go:89] "kube-controller-manager-ha-285481-m03" [974871d4-bf77-48d6-b5b0-2315381e40f0] Running
	I0315 23:13:35.284848   92071 system_pods.go:89] "kube-proxy-2hcgt" [7dd02c2a-8594-4dcc-b3c9-01e8bf19797d] Running
	I0315 23:13:35.284852   92071 system_pods.go:89] "kube-proxy-cml9m" [a1b0719f-96b2-4671-b09c-583b2c04595e] Running
	I0315 23:13:35.284856   92071 system_pods.go:89] "kube-proxy-d2fjd" [d2fc9b42-7c35-4472-a8de-4f5dafe9d208] Running
	I0315 23:13:35.284860   92071 system_pods.go:89] "kube-scheduler-ha-285481" [06b32208-b1ad-4ad8-90ff-0d4b2fb3ff76] Running
	I0315 23:13:35.284872   92071 system_pods.go:89] "kube-scheduler-ha-285481-m02" [d62a0e22-32f2-4b82-a73f-080674b2acdb] Running
	I0315 23:13:35.284886   92071 system_pods.go:89] "kube-scheduler-ha-285481-m03" [5b522c93-1875-436c-a84f-7a71b1a694f6] Running
	I0315 23:13:35.284892   92071 system_pods.go:89] "kube-vip-ha-285481" [9c3244ae-71d3-41ff-9bcc-c6f1243baf6a] Running
	I0315 23:13:35.284897   92071 system_pods.go:89] "kube-vip-ha-285481-m02" [d369f246-df5e-4b78-a1bb-58317b795b59] Running
	I0315 23:13:35.284906   92071 system_pods.go:89] "kube-vip-ha-285481-m03" [c73a666c-3bb1-4b8c-becc-574021feab19] Running
	I0315 23:13:35.284912   92071 system_pods.go:89] "storage-provisioner" [53d0c1b0-3c5c-443e-a653-9b91407c8792] Running
	I0315 23:13:35.284925   92071 system_pods.go:126] duration metric: took 209.11843ms to wait for k8s-apps to be running ...
	I0315 23:13:35.284938   92071 system_svc.go:44] waiting for kubelet service to be running ....
	I0315 23:13:35.284996   92071 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:13:35.301163   92071 system_svc.go:56] duration metric: took 16.210788ms WaitForService to wait for kubelet
	I0315 23:13:35.301196   92071 kubeadm.go:576] duration metric: took 20.607784825s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:13:35.301216   92071 node_conditions.go:102] verifying NodePressure condition ...
	I0315 23:13:35.470546   92071 request.go:629] Waited for 169.255647ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.23:8443/api/v1/nodes
	I0315 23:13:35.470626   92071 round_trippers.go:463] GET https://192.168.39.23:8443/api/v1/nodes
	I0315 23:13:35.470633   92071 round_trippers.go:469] Request Headers:
	I0315 23:13:35.470643   92071 round_trippers.go:473]     Accept: application/json, */*
	I0315 23:13:35.470650   92071 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0315 23:13:35.476349   92071 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0315 23:13:35.478398   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:13:35.478420   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:13:35.478434   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:13:35.478439   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:13:35.478444   92071 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0315 23:13:35.478448   92071 node_conditions.go:123] node cpu capacity is 2
	I0315 23:13:35.478454   92071 node_conditions.go:105] duration metric: took 177.232968ms to run NodePressure ...
	I0315 23:13:35.478470   92071 start.go:240] waiting for startup goroutines ...
	I0315 23:13:35.478529   92071 start.go:254] writing updated cluster config ...
	I0315 23:13:35.478854   92071 ssh_runner.go:195] Run: rm -f paused
	I0315 23:13:35.533308   92071 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0315 23:13:35.536411   92071 out.go:177] * Done! kubectl is now configured to use "ha-285481" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.186379552Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2073af43-5877-4ef0-af37-8519b53002f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.186791395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2073af43-5877-4ef0-af37-8519b53002f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.238075661Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c52655b0-3be9-4a79-abd7-ced20c3974f3 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.238151429Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c52655b0-3be9-4a79-abd7-ced20c3974f3 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.239213973Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5806054d-f6d3-455e-b88d-4a5e849b357d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.239849734Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544681239820237,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5806054d-f6d3-455e-b88d-4a5e849b357d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.240423846Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=295565de-0f44-4b4b-bf09-ededa830356c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.240483531Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=295565de-0f44-4b4b-bf09-ededa830356c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.240891025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=295565de-0f44-4b4b-bf09-ededa830356c name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.285494128Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df70c62d-75a4-4666-93e8-ba18fc968a8b name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.285612390Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df70c62d-75a4-4666-93e8-ba18fc968a8b name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.286566513Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=10a83e58-b4f7-4da3-8728-9143c5c10345 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.286700344Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=10a83e58-b4f7-4da3-8728-9143c5c10345 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.288963646Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d55e2960-ae06-4cf3-a843-f23381041f96 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.289413026Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544681289373259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d55e2960-ae06-4cf3-a843-f23381041f96 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.290162045Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=194e18f6-e963-4ce9-87b9-6632a6060eb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.290241956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=194e18f6-e963-4ce9-87b9-6632a6060eb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.291579788Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=194e18f6-e963-4ce9-87b9-6632a6060eb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.341538789Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d49bf556-2b64-488c-8282-bf8d4e0d3f34 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.341612436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d49bf556-2b64-488c-8282-bf8d4e0d3f34 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.343011493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=391aa290-09d1-47c1-9377-45839160c326 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.343479104Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710544681343455203,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=391aa290-09d1-47c1-9377-45839160c326 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.344211414Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=095ef3de-7a85-489b-b6e6-f0c05fdee3bd name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.344268517Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=095ef3de-7a85-489b-b6e6-f0c05fdee3bd name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:18:01 ha-285481 crio[682]: time="2024-03-15 23:18:01.344550979Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544418859914017,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:1,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544318881987673,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a3c057a91d21ac7ed19c45899e678a81785b3fddcdee79bd7d4cd802bd18856,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710544317874254587,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-
log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53,PodSandboxId:de78c4c3104b5e7c34acbbaf32ef7fddf5ad12f394654436507036bdaa62aa5d,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256792991877,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,
\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e6eb1af2d4d6a9703ac52119ff7b930afab55e1aaf433ad2d35d85dbed5fbdd,PodSandboxId:fef4071ee48b77acba4d259e531a6043af4c49a8b927ba3911476e395448e154,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544256716714164,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407
c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316,PodSandboxId:99ea9ee0a5c9bee740135618942437e2bca10e3d6c15ce6286b6392b58457434,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544256660795360,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]
string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7,PodSandboxId:2cfe44ef271503f8a240624249884bd2dc56bafc445ade78653c06bdae50109e,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544255021262530,Label
s:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544251853151496,Labels:map[string]string{io.kubernetes.c
ontainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:66a2aef00e4d99ad269a0f18c82b1777bd9139c6f0f23acaaee706ad77807889,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544235638562934,Labels:map[string]string{io.kubernetes.cont
ainer.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db,PodSandboxId:153d2f487f07dd55c278a998a4db57227d8334035d8c41157a72d9f0cda00d35,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544232650972602,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kub
ernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544232586229119,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube
-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079,PodSandboxId:52b4af4847f8c7538cb5c851d4270894ef16ed78413f7ed29224a04110732e3b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544232519014523,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-co
ntroller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544232506135438,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namesp
ace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=095ef3de-7a85-489b-b6e6-f0c05fdee3bd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7e21f8e6f1787       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago       Running             busybox                   0                   8857e9f8aa447       busybox-5b5d89c9d6-klvd7
	213c94783e488       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      6 minutes ago       Running             kube-vip                  1                   0a7887e08f455       kube-vip-ha-285481
	3a3c057a91d21       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       1                   fef4071ee48b7       storage-provisioner
	3f54e9bdd6145       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   de78c4c3104b5       coredns-5dd5756b68-9c44k
	8e6eb1af2d4d6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago       Exited              storage-provisioner       0                   fef4071ee48b7       storage-provisioner
	46eabb63fd66f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago       Running             coredns                   0                   99ea9ee0a5c9b       coredns-5dd5756b68-qxtp4
	047f19229a080       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago       Running             kindnet-cni               0                   2cfe44ef27150       kindnet-9fd6f
	e7c7732963470       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago       Running             kube-proxy                0                   5404a98a681ea       kube-proxy-cml9m
	66a2aef00e4d9       ghcr.io/kube-vip/kube-vip@sha256:58ce44dc60694b0aa547d87d4a8337133961d3a8538021a672ba9bd33b267c9a     7 minutes ago       Exited              kube-vip                  0                   0a7887e08f455       kube-vip-ha-285481
	bc2a1703be0ef       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago       Running             kube-apiserver            0                   153d2f487f07d       kube-apiserver-ha-285481
	b1799ad1e14d3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago       Running             kube-scheduler            0                   9a7f75d914382       kube-scheduler-ha-285481
	122f4a81c61ff       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago       Running             kube-controller-manager   0                   52b4af4847f8c       kube-controller-manager-ha-285481
	a6eaa3307ddf1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago       Running             etcd                      0                   8e777ceb1c377       etcd-ha-285481
	
	
	==> coredns [3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53] <==
	[INFO] 10.244.1.2:60720 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000102247s
	[INFO] 10.244.2.2:40209 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000212669s
	[INFO] 10.244.2.2:50711 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000188088s
	[INFO] 10.244.2.2:54282 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186987s
	[INFO] 10.244.2.2:56388 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.0028159s
	[INFO] 10.244.2.2:35533 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000228914s
	[INFO] 10.244.0.4:60496 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116771s
	[INFO] 10.244.0.4:52905 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000137131s
	[INFO] 10.244.0.4:56100 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107905s
	[INFO] 10.244.0.4:35690 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001556122s
	[INFO] 10.244.0.4:38982 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00024934s
	[INFO] 10.244.1.2:41443 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000153641s
	[INFO] 10.244.1.2:38021 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000125707s
	[INFO] 10.244.1.2:54662 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000104577s
	[INFO] 10.244.1.2:58084 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000186921s
	[INFO] 10.244.2.2:43382 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000132733s
	[INFO] 10.244.2.2:34481 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000081396s
	[INFO] 10.244.0.4:49529 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097682s
	[INFO] 10.244.0.4:53261 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000080312s
	[INFO] 10.244.1.2:48803 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000148992s
	[INFO] 10.244.1.2:55840 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107531s
	[INFO] 10.244.2.2:34212 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.003902135s
	[INFO] 10.244.0.4:33277 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000128105s
	[INFO] 10.244.0.4:48728 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000114263s
	[INFO] 10.244.1.2:37155 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00021268s
	
	
	==> coredns [46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316] <==
	[INFO] 10.244.1.2:33779 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.001884513s
	[INFO] 10.244.2.2:59691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00342241s
	[INFO] 10.244.2.2:39895 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000176934s
	[INFO] 10.244.2.2:39778 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000147498s
	[INFO] 10.244.0.4:45123 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001734195s
	[INFO] 10.244.0.4:47704 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000197308s
	[INFO] 10.244.0.4:41096 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008016s
	[INFO] 10.244.1.2:33672 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00016687s
	[INFO] 10.244.1.2:44656 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001947816s
	[INFO] 10.244.1.2:34454 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000181291s
	[INFO] 10.244.1.2:57821 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001542248s
	[INFO] 10.244.2.2:50572 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014063s
	[INFO] 10.244.2.2:48373 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000151728s
	[INFO] 10.244.0.4:34408 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00010262s
	[INFO] 10.244.0.4:39266 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000108924s
	[INFO] 10.244.1.2:55315 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000217818s
	[INFO] 10.244.1.2:36711 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00009775s
	[INFO] 10.244.2.2:41992 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00201605s
	[INFO] 10.244.2.2:57037 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000156874s
	[INFO] 10.244.2.2:46561 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165147s
	[INFO] 10.244.0.4:54226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00009066s
	[INFO] 10.244.0.4:55001 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000129509s
	[INFO] 10.244.1.2:48297 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000160887s
	[INFO] 10.244.1.2:55268 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000112442s
	[INFO] 10.244.1.2:45416 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000077587s
	
	
	==> describe nodes <==
	Name:               ha-285481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T23_10_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:10:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:18:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:13:46 +0000   Fri, 15 Mar 2024 23:10:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-285481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7afae64232d041e98363d899e90f24b0
	  System UUID:                7afae642-32d0-41e9-8363-d899e90f24b0
	  Boot ID:                    ac63bdb2-abe3-40ea-a654-ca3224dec308
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-klvd7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 coredns-5dd5756b68-9c44k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 coredns-5dd5756b68-qxtp4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m10s
	  kube-system                 etcd-ha-285481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m22s
	  kube-system                 kindnet-9fd6f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m10s
	  kube-system                 kube-apiserver-ha-285481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-controller-manager-ha-285481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-proxy-cml9m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kube-scheduler-ha-285481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m22s
	  kube-system                 kube-vip-ha-285481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m9s   kube-proxy       
	  Normal  Starting                 7m22s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m22s  kubelet          Node ha-285481 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m22s  kubelet          Node ha-285481 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m22s  kubelet          Node ha-285481 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m22s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m11s  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal  NodeReady                7m5s   kubelet          Node ha-285481 status is now: NodeReady
	  Normal  RegisteredNode           5m46s  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal  RegisteredNode           4m33s  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	
	
	Name:               ha-285481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_12_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:11:46 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:14:39 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 23:13:45 +0000   Fri, 15 Mar 2024 23:15:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-285481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f269fbf2ace479a8b9438486949ceb1
	  System UUID:                7f269fbf-2ace-479a-8b94-38486949ceb1
	  Boot ID:                    e6c50974-8177-41ea-975f-125b0237e5fe
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tgxps                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 etcd-ha-285481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m14s
	  kube-system                 kindnet-pnxpk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m15s
	  kube-system                 kube-apiserver-ha-285481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m14s
	  kube-system                 kube-controller-manager-ha-285481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-proxy-2hcgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m15s
	  kube-system                 kube-scheduler-ha-285481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-vip-ha-285481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m54s  kube-proxy       
	  Normal  RegisteredNode  5m46s  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode  4m33s  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  NodeNotReady    2m41s  node-controller  Node ha-285481-m02 status is now: NodeNotReady
	
	
	Name:               ha-285481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_13_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:13:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:17:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:13:42 +0000   Fri, 15 Mar 2024 23:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-285481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 efeed3fefd6b40f689eaa7f1842dcbc9
	  System UUID:                efeed3fe-fd6b-40f6-89ea-a7f1842dcbc9
	  Boot ID:                    b397d06d-e644-4adf-aa7d-d0201c317777
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc7rx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 etcd-ha-285481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m50s
	  kube-system                 kindnet-zptcr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m50s
	  kube-system                 kube-apiserver-ha-285481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-controller-manager-ha-285481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m49s
	  kube-system                 kube-proxy-d2fjd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-ha-285481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m43s
	  kube-system                 kube-vip-ha-285481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m48s  kube-proxy       
	  Normal  RegisteredNode  4m46s  node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal  RegisteredNode  4m46s  node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal  RegisteredNode  4m33s  node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	
	
	Name:               ha-285481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_14_18_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:17:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:14:48 +0000   Fri, 15 Mar 2024 23:14:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-285481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8e447d79a3745579ec32c4638493b56
	  System UUID:                d8e447d7-9a37-4557-9ec3-2c4638493b56
	  Boot ID:                    f361951e-559c-44a5-bf42-821f3dd951a8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vzxwb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m44s
	  kube-system                 kube-proxy-sr2rg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m44s (x5 over 3m45s)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s (x5 over 3m45s)  kubelet          Node ha-285481-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s (x5 over 3m45s)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal  NodeReady                3m36s                  kubelet          Node ha-285481-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar15 23:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051507] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040599] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar15 23:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.585346] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.665493] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.660439] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075855] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.154877] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.138534] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.233962] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.832504] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.064584] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.460453] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.636379] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.224199] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.090626] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.490228] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.030141] kauditd_printk_skb: 53 callbacks suppressed
	[Mar15 23:11] kauditd_printk_skb: 11 callbacks suppressed
	
	
	==> etcd [a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca] <==
	{"level":"warn","ts":"2024-03-15T23:18:01.385059Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.455245Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.484719Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.547046Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.584207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.632231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.643486Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.648566Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.666698Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.674818Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.68519Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.711981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.718931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.787876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.791596Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.79546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.80351Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.81025Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.81582Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.820462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.824404Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.832331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.840785Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.848475Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:18:01.885048Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 23:18:01 up 8 min,  0 users,  load average: 0.25, 0.41, 0.26
	Linux ha-285481 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [047f19229a080af436cf1e548043f3e3f6749777a94b52ee2bc877427b13acd7] <==
	I0315 23:17:21.959117       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:17:31.967355       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:17:31.967415       1 main.go:227] handling current node
	I0315 23:17:31.967432       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:17:31.967440       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:17:31.967695       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:17:31.967732       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:17:31.967825       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:17:31.967875       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:17:41.974716       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:17:41.974774       1 main.go:227] handling current node
	I0315 23:17:41.974790       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:17:41.974796       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:17:41.974940       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:17:41.974968       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:17:41.975041       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:17:41.975069       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:17:51.986837       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:17:51.986892       1 main.go:227] handling current node
	I0315 23:17:51.986903       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:17:51.986909       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:17:51.987034       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:17:51.987064       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:17:51.987126       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:17:51.987153       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db] <==
	Trace[1235452084]: [5.397257775s] [5.397257775s] END
	I0315 23:12:01.443018       1 trace.go:236] Trace[1775193868]: "Get" accept:application/json, */*,audit-id:b35596a2-86c4-4b9d-91c1-f8df68fb2085,client:192.168.39.23,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (15-Mar-2024 23:11:58.543) (total time: 2899ms):
	Trace[1775193868]: [2.899799978s] [2.899799978s] END
	I0315 23:12:01.444020       1 trace.go:236] Trace[1861158366]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:0ed8b92f-1cd7-4980-a385-ba948b361209,client:192.168.39.201,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:54.814) (total time: 6629ms):
	Trace[1861158366]: ---"Write to database call failed" len:2996,err:etcdserver: leader changed 6629ms (23:12:01.443)
	Trace[1861158366]: [6.629950797s] [6.629950797s] END
	E0315 23:12:01.502448       1 controller.go:193] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-roaihxybwiytqfpuxfgxbi337e\": the object has been modified; please apply your changes to the latest version and try again"
	I0315 23:12:01.505824       1 trace.go:236] Trace[1015479745]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:38b1fbb8-c8fa-4faf-b8ec-bd716d940771,client:192.168.39.201,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:55.128) (total time: 6377ms):
	Trace[1015479745]: ["Create etcd3" audit-id:38b1fbb8-c8fa-4faf-b8ec-bd716d940771,key:/events/kube-system/kube-vip-ha-285481-m02.17bd12fc4599efea,type:*core.Event,resource:events 6376ms (23:11:55.129)
	Trace[1015479745]:  ---"Txn call succeeded" 6376ms (23:12:01.505)]
	Trace[1015479745]: [6.377075122s] [6.377075122s] END
	I0315 23:12:01.508373       1 trace.go:236] Trace[1095726586]: "Patch" accept:application/vnd.kubernetes.protobuf, */*,audit-id:2ea95511-7b5a-437f-a2c4-2d992d4a484d,client:192.168.39.23,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-285481-m02,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:node-controller,verb:PATCH (15-Mar-2024 23:12:00.621) (total time: 886ms):
	Trace[1095726586]: ["GuaranteedUpdate etcd3" audit-id:2ea95511-7b5a-437f-a2c4-2d992d4a484d,key:/minions/ha-285481-m02,type:*core.Node,resource:nodes 886ms (23:12:00.621)
	Trace[1095726586]:  ---"Txn call completed" 883ms (23:12:01.506)]
	Trace[1095726586]: ---"About to apply patch" 883ms (23:12:01.506)
	Trace[1095726586]: [886.94658ms] [886.94658ms] END
	I0315 23:12:01.508974       1 trace.go:236] Trace[769205823]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:d4cd8845-b3b2-43ec-91f4-85e20e5637e1,client:192.168.39.201,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-285481-m02/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (15-Mar-2024 23:11:56.670) (total time: 4838ms):
	Trace[769205823]: ["GuaranteedUpdate etcd3" audit-id:d4cd8845-b3b2-43ec-91f4-85e20e5637e1,key:/minions/ha-285481-m02,type:*core.Node,resource:nodes 4838ms (23:11:56.670)
	Trace[769205823]:  ---"Txn call completed" 4836ms (23:12:01.508)]
	Trace[769205823]: ---"Object stored in database" 4836ms (23:12:01.508)
	Trace[769205823]: [4.838727567s] [4.838727567s] END
	I0315 23:12:01.566708       1 trace.go:236] Trace[319283670]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:a3f07780-b6a4-4adb-96a9-e1b4025cbcf4,client:192.168.39.201,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:56.203) (total time: 5363ms):
	Trace[319283670]: [5.363282306s] [5.363282306s] END
	I0315 23:12:01.568115       1 trace.go:236] Trace[752789856]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:e48e3001-80b9-41bd-b231-80be6effb7f4,client:192.168.39.201,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (15-Mar-2024 23:11:55.199) (total time: 6368ms):
	Trace[752789856]: [6.368395796s] [6.368395796s] END
	
	
	==> kube-controller-manager [122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079] <==
	I0315 23:13:39.635140       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="196.744µs"
	E0315 23:14:16.299719       1 certificate_controller.go:146] Sync csr-dxp76 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-dxp76": the object has been modified; please apply your changes to the latest version and try again
	I0315 23:14:17.768214       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-285481-m04\" does not exist"
	I0315 23:14:17.789114       1 range_allocator.go:380] "Set node PodCIDR" node="ha-285481-m04" podCIDRs=["10.244.3.0/24"]
	I0315 23:14:17.825165       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-vzxwb"
	I0315 23:14:17.836261       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sr2rg"
	I0315 23:14:17.943996       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-4ch5l"
	I0315 23:14:17.953047       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-9lkhd"
	I0315 23:14:18.066329       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-q8k8l"
	I0315 23:14:18.142808       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-cmmxh"
	I0315 23:14:20.507194       1 event.go:307] "Event occurred" object="ha-285481-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller"
	I0315 23:14:20.513850       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-285481-m04"
	I0315 23:14:25.501575       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-285481-m04"
	I0315 23:15:20.731259       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-285481-m04"
	I0315 23:15:20.734416       1 event.go:307] "Event occurred" object="ha-285481-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-285481-m02 status is now: NodeNotReady"
	I0315 23:15:20.757734       1 event.go:307] "Event occurred" object="kube-system/kube-scheduler-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.774611       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-tgxps" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.805552       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-2hcgt" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.826501       1 event.go:307] "Event occurred" object="kube-system/kindnet-pnxpk" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.837819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="62.450903ms"
	I0315 23:15:20.838054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.757µs"
	I0315 23:15:20.852388       1 event.go:307] "Event occurred" object="kube-system/etcd-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.872297       1 event.go:307] "Event occurred" object="kube-system/kube-apiserver-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.883895       1 event.go:307] "Event occurred" object="kube-system/kube-vip-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:15:20.899198       1 event.go:307] "Event occurred" object="kube-system/kube-controller-manager-ha-285481-m02" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	
	
	==> kube-proxy [e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2] <==
	I0315 23:10:52.335496       1 server_others.go:69] "Using iptables proxy"
	I0315 23:10:52.356464       1 node.go:141] Successfully retrieved node IP: 192.168.39.23
	I0315 23:10:52.452610       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:10:52.452703       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:10:52.455121       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:10:52.455940       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:10:52.456590       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:10:52.456708       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:10:52.458279       1 config.go:188] "Starting service config controller"
	I0315 23:10:52.465236       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:10:52.458585       1 config.go:315] "Starting node config controller"
	I0315 23:10:52.465825       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 23:10:52.461576       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:10:52.465925       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:10:52.565730       1 shared_informer.go:318] Caches are synced for service config
	I0315 23:10:52.566925       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:10:52.567844       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938] <==
	W0315 23:10:37.039340       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 23:10:37.039431       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 23:10:37.050033       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 23:10:37.050162       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 23:10:37.074550       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 23:10:37.075079       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0315 23:10:40.089781       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 23:13:11.599531       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-psf5b\": pod kindnet-psf5b is already assigned to node \"ha-285481-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-psf5b" node="ha-285481-m03"
	E0315 23:13:11.600127       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 47ed4790-7753-4041-bcf7-d384de226727(kube-system/kindnet-psf5b) wasn't assumed so cannot be forgotten"
	E0315 23:13:11.600419       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-psf5b\": pod kindnet-psf5b is already assigned to node \"ha-285481-m03\"" pod="kube-system/kindnet-psf5b"
	I0315 23:13:11.600807       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-psf5b" node="ha-285481-m03"
	E0315 23:14:17.875483       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sr2rg\": pod kube-proxy-sr2rg is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sr2rg" node="ha-285481-m04"
	E0315 23:14:17.876096       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 104a5e4c-e568-4936-904d-e82b59620b8b(kube-system/kube-proxy-sr2rg) wasn't assumed so cannot be forgotten"
	E0315 23:14:17.876829       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sr2rg\": pod kube-proxy-sr2rg is already assigned to node \"ha-285481-m04\"" pod="kube-system/kube-proxy-sr2rg"
	I0315 23:14:17.877084       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sr2rg" node="ha-285481-m04"
	E0315 23:14:17.902915       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-4ch5l\": pod kindnet-4ch5l is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-4ch5l" node="ha-285481-m04"
	E0315 23:14:17.903231       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-4ch5l\": pod kindnet-4ch5l is already assigned to node \"ha-285481-m04\"" pod="kube-system/kindnet-4ch5l"
	E0315 23:14:18.042227       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-q8k8l\": pod kube-proxy-q8k8l is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-q8k8l" node="ha-285481-m04"
	E0315 23:14:18.042314       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 3385d524-1d32-4a17-be74-cc7e4ec17cf6(kube-system/kube-proxy-q8k8l) wasn't assumed so cannot be forgotten"
	E0315 23:14:18.042360       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-q8k8l\": pod kube-proxy-q8k8l is already assigned to node \"ha-285481-m04\"" pod="kube-system/kube-proxy-q8k8l"
	I0315 23:14:18.042416       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-q8k8l" node="ha-285481-m04"
	E0315 23:14:18.042955       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-cmmxh\": pod kindnet-cmmxh is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-cmmxh" node="ha-285481-m04"
	E0315 23:14:18.043026       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7c6c2048-28ef-49fc-909a-aad75912b3b1(kube-system/kindnet-cmmxh) wasn't assumed so cannot be forgotten"
	E0315 23:14:18.043052       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-cmmxh\": pod kindnet-cmmxh is already assigned to node \"ha-285481-m04\"" pod="kube-system/kindnet-cmmxh"
	I0315 23:14:18.043082       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-cmmxh" node="ha-285481-m04"
	
	
	==> kubelet <==
	Mar 15 23:13:39 ha-285481 kubelet[1384]: E0315 23:13:39.425986    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:13:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:13:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:13:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:13:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:14:39 ha-285481 kubelet[1384]: E0315 23:14:39.424262    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:14:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:14:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:14:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:14:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:15:39 ha-285481 kubelet[1384]: E0315 23:15:39.425764    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:15:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:15:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:15:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:15:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:16:39 ha-285481 kubelet[1384]: E0315 23:16:39.420145    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:16:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:16:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:16:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:16:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:17:39 ha-285481 kubelet[1384]: E0315 23:17:39.421267    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:17:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:17:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:17:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:17:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-285481 -n ha-285481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-285481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (50.88s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (381.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-285481 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-285481 -v=7 --alsologtostderr
E0315 23:18:58.905549   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:19:08.403009   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:19:36.087069   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-285481 -v=7 --alsologtostderr: exit status 82 (2m2.053176822s)

                                                
                                                
-- stdout --
	* Stopping node "ha-285481-m04"  ...
	* Stopping node "ha-285481-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:18:03.405736   97780 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:18:03.406014   97780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:18:03.406024   97780 out.go:304] Setting ErrFile to fd 2...
	I0315 23:18:03.406028   97780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:18:03.406744   97780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:18:03.407152   97780 out.go:298] Setting JSON to false
	I0315 23:18:03.407402   97780 mustload.go:65] Loading cluster: ha-285481
	I0315 23:18:03.407995   97780 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:18:03.408090   97780 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:18:03.408274   97780 mustload.go:65] Loading cluster: ha-285481
	I0315 23:18:03.408408   97780 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:18:03.408434   97780 stop.go:39] StopHost: ha-285481-m04
	I0315 23:18:03.408836   97780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:03.408879   97780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:03.423850   97780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33739
	I0315 23:18:03.424397   97780 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:03.425037   97780 main.go:141] libmachine: Using API Version  1
	I0315 23:18:03.425070   97780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:03.425460   97780 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:03.428225   97780 out.go:177] * Stopping node "ha-285481-m04"  ...
	I0315 23:18:03.429820   97780 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 23:18:03.429858   97780 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:18:03.430091   97780 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 23:18:03.430112   97780 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:18:03.432886   97780 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:18:03.433305   97780 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:13:59 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:18:03.433336   97780 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:18:03.433389   97780 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:18:03.433582   97780 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:18:03.433752   97780 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:18:03.433915   97780 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:18:03.524252   97780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 23:18:03.578775   97780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 23:18:03.632735   97780 main.go:141] libmachine: Stopping "ha-285481-m04"...
	I0315 23:18:03.632787   97780 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:18:03.634380   97780 main.go:141] libmachine: (ha-285481-m04) Calling .Stop
	I0315 23:18:03.637877   97780 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 0/120
	I0315 23:18:04.964473   97780 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:18:04.966044   97780 main.go:141] libmachine: Machine "ha-285481-m04" was stopped.
	I0315 23:18:04.966068   97780 stop.go:75] duration metric: took 1.536251067s to stop
	I0315 23:18:04.966119   97780 stop.go:39] StopHost: ha-285481-m03
	I0315 23:18:04.966480   97780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:18:04.966538   97780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:18:04.981471   97780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38427
	I0315 23:18:04.981888   97780 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:18:04.982374   97780 main.go:141] libmachine: Using API Version  1
	I0315 23:18:04.982396   97780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:18:04.982809   97780 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:18:04.985360   97780 out.go:177] * Stopping node "ha-285481-m03"  ...
	I0315 23:18:04.986987   97780 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 23:18:04.987015   97780 main.go:141] libmachine: (ha-285481-m03) Calling .DriverName
	I0315 23:18:04.987229   97780 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 23:18:04.987255   97780 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHHostname
	I0315 23:18:04.989909   97780 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:18:04.990502   97780 main.go:141] libmachine: (ha-285481-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:2e:06", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:12:33 +0000 UTC Type:0 Mac:52:54:00:2c:2e:06 Iaid: IPaddr:192.168.39.248 Prefix:24 Hostname:ha-285481-m03 Clientid:01:52:54:00:2c:2e:06}
	I0315 23:18:04.990535   97780 main.go:141] libmachine: (ha-285481-m03) DBG | domain ha-285481-m03 has defined IP address 192.168.39.248 and MAC address 52:54:00:2c:2e:06 in network mk-ha-285481
	I0315 23:18:04.990660   97780 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHPort
	I0315 23:18:04.990856   97780 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHKeyPath
	I0315 23:18:04.991011   97780 main.go:141] libmachine: (ha-285481-m03) Calling .GetSSHUsername
	I0315 23:18:04.991131   97780 sshutil.go:53] new ssh client: &{IP:192.168.39.248 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m03/id_rsa Username:docker}
	I0315 23:18:05.084185   97780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 23:18:05.139380   97780 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 23:18:05.197531   97780 main.go:141] libmachine: Stopping "ha-285481-m03"...
	I0315 23:18:05.197569   97780 main.go:141] libmachine: (ha-285481-m03) Calling .GetState
	I0315 23:18:05.199153   97780 main.go:141] libmachine: (ha-285481-m03) Calling .Stop
	I0315 23:18:05.202676   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 0/120
	I0315 23:18:06.204368   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 1/120
	I0315 23:18:07.205952   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 2/120
	I0315 23:18:08.207373   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 3/120
	I0315 23:18:09.208910   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 4/120
	I0315 23:18:10.210582   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 5/120
	I0315 23:18:11.212311   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 6/120
	I0315 23:18:12.213725   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 7/120
	I0315 23:18:13.215312   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 8/120
	I0315 23:18:14.216854   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 9/120
	I0315 23:18:15.219127   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 10/120
	I0315 23:18:16.220792   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 11/120
	I0315 23:18:17.222322   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 12/120
	I0315 23:18:18.223692   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 13/120
	I0315 23:18:19.225558   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 14/120
	I0315 23:18:20.227688   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 15/120
	I0315 23:18:21.230190   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 16/120
	I0315 23:18:22.231406   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 17/120
	I0315 23:18:23.232741   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 18/120
	I0315 23:18:24.234238   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 19/120
	I0315 23:18:25.236012   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 20/120
	I0315 23:18:26.237533   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 21/120
	I0315 23:18:27.239126   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 22/120
	I0315 23:18:28.240663   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 23/120
	I0315 23:18:29.242355   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 24/120
	I0315 23:18:30.244448   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 25/120
	I0315 23:18:31.246153   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 26/120
	I0315 23:18:32.247691   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 27/120
	I0315 23:18:33.249291   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 28/120
	I0315 23:18:34.251645   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 29/120
	I0315 23:18:35.253344   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 30/120
	I0315 23:18:36.254598   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 31/120
	I0315 23:18:37.256192   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 32/120
	I0315 23:18:38.257493   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 33/120
	I0315 23:18:39.258889   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 34/120
	I0315 23:18:40.260619   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 35/120
	I0315 23:18:41.262099   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 36/120
	I0315 23:18:42.263569   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 37/120
	I0315 23:18:43.264978   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 38/120
	I0315 23:18:44.266346   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 39/120
	I0315 23:18:45.267735   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 40/120
	I0315 23:18:46.269125   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 41/120
	I0315 23:18:47.270637   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 42/120
	I0315 23:18:48.272040   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 43/120
	I0315 23:18:49.273586   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 44/120
	I0315 23:18:50.275397   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 45/120
	I0315 23:18:51.276701   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 46/120
	I0315 23:18:52.278323   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 47/120
	I0315 23:18:53.280499   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 48/120
	I0315 23:18:54.282104   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 49/120
	I0315 23:18:55.283728   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 50/120
	I0315 23:18:56.285757   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 51/120
	I0315 23:18:57.287013   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 52/120
	I0315 23:18:58.288434   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 53/120
	I0315 23:18:59.289713   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 54/120
	I0315 23:19:00.291722   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 55/120
	I0315 23:19:01.292854   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 56/120
	I0315 23:19:02.294561   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 57/120
	I0315 23:19:03.295836   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 58/120
	I0315 23:19:04.297954   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 59/120
	I0315 23:19:05.299419   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 60/120
	I0315 23:19:06.300716   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 61/120
	I0315 23:19:07.302297   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 62/120
	I0315 23:19:08.303509   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 63/120
	I0315 23:19:09.305866   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 64/120
	I0315 23:19:10.308018   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 65/120
	I0315 23:19:11.309349   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 66/120
	I0315 23:19:12.310740   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 67/120
	I0315 23:19:13.312224   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 68/120
	I0315 23:19:14.313595   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 69/120
	I0315 23:19:15.315507   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 70/120
	I0315 23:19:16.316748   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 71/120
	I0315 23:19:17.318241   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 72/120
	I0315 23:19:18.319838   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 73/120
	I0315 23:19:19.321502   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 74/120
	I0315 23:19:20.322952   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 75/120
	I0315 23:19:21.324420   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 76/120
	I0315 23:19:22.325712   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 77/120
	I0315 23:19:23.327124   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 78/120
	I0315 23:19:24.328694   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 79/120
	I0315 23:19:25.330803   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 80/120
	I0315 23:19:26.332169   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 81/120
	I0315 23:19:27.333762   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 82/120
	I0315 23:19:28.335163   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 83/120
	I0315 23:19:29.336569   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 84/120
	I0315 23:19:30.338448   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 85/120
	I0315 23:19:31.339816   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 86/120
	I0315 23:19:32.341104   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 87/120
	I0315 23:19:33.342543   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 88/120
	I0315 23:19:34.343924   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 89/120
	I0315 23:19:35.345913   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 90/120
	I0315 23:19:36.347357   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 91/120
	I0315 23:19:37.348656   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 92/120
	I0315 23:19:38.349914   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 93/120
	I0315 23:19:39.351266   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 94/120
	I0315 23:19:40.352986   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 95/120
	I0315 23:19:41.355268   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 96/120
	I0315 23:19:42.356657   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 97/120
	I0315 23:19:43.357975   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 98/120
	I0315 23:19:44.359451   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 99/120
	I0315 23:19:45.361415   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 100/120
	I0315 23:19:46.362798   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 101/120
	I0315 23:19:47.364118   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 102/120
	I0315 23:19:48.365748   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 103/120
	I0315 23:19:49.367225   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 104/120
	I0315 23:19:50.368987   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 105/120
	I0315 23:19:51.370303   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 106/120
	I0315 23:19:52.371872   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 107/120
	I0315 23:19:53.373856   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 108/120
	I0315 23:19:54.375101   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 109/120
	I0315 23:19:55.376989   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 110/120
	I0315 23:19:56.378263   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 111/120
	I0315 23:19:57.379567   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 112/120
	I0315 23:19:58.381002   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 113/120
	I0315 23:19:59.382455   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 114/120
	I0315 23:20:00.384445   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 115/120
	I0315 23:20:01.385836   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 116/120
	I0315 23:20:02.387284   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 117/120
	I0315 23:20:03.388636   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 118/120
	I0315 23:20:04.390053   97780 main.go:141] libmachine: (ha-285481-m03) Waiting for machine to stop 119/120
	I0315 23:20:05.390658   97780 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 23:20:05.390729   97780 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 23:20:05.393034   97780 out.go:177] 
	W0315 23:20:05.394733   97780 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 23:20:05.394752   97780 out.go:239] * 
	* 
	W0315 23:20:05.397942   97780 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 23:20:05.399577   97780 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-285481 -v=7 --alsologtostderr" : exit status 82
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-285481 --wait=true -v=7 --alsologtostderr
E0315 23:23:58.906203   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:24:08.402429   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-285481 --wait=true -v=7 --alsologtostderr: (4m16.534809821s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-285481
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-285481 -n ha-285481
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-285481 logs -n 25: (2.131945784s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m04 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp testdata/cp-test.txt                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481:/home/docker/cp-test_ha-285481-m04_ha-285481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481 sudo cat                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03:/home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m03 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-285481 node stop m02 -v=7                                                     | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-285481 node start m02 -v=7                                                    | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-285481 -v=7                                                           | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-285481 -v=7                                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-285481 --wait=true -v=7                                                    | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:20 UTC | 15 Mar 24 23:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-285481                                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:24 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 23:20:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 23:20:05.473334   98161 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:20:05.473520   98161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:20:05.473538   98161 out.go:304] Setting ErrFile to fd 2...
	I0315 23:20:05.473545   98161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:20:05.473903   98161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:20:05.474719   98161 out.go:298] Setting JSON to false
	I0315 23:20:05.476045   98161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7355,"bootTime":1710537450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:20:05.476133   98161 start.go:139] virtualization: kvm guest
	I0315 23:20:05.478845   98161 out.go:177] * [ha-285481] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:20:05.480654   98161 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:20:05.480658   98161 notify.go:220] Checking for updates...
	I0315 23:20:05.481932   98161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:20:05.483432   98161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:20:05.484943   98161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:20:05.486307   98161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:20:05.487615   98161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:20:05.489392   98161 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:20:05.489480   98161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:20:05.489929   98161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:20:05.489970   98161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:20:05.505831   98161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38943
	I0315 23:20:05.506241   98161 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:20:05.506825   98161 main.go:141] libmachine: Using API Version  1
	I0315 23:20:05.506849   98161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:20:05.507209   98161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:20:05.507427   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:20:05.542547   98161 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 23:20:05.544154   98161 start.go:297] selected driver: kvm2
	I0315 23:20:05.544181   98161 start.go:901] validating driver "kvm2" against &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:20:05.544327   98161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:20:05.544652   98161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:20:05.544720   98161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:20:05.560544   98161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:20:05.561300   98161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:20:05.561413   98161 cni.go:84] Creating CNI manager for ""
	I0315 23:20:05.561429   98161 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 23:20:05.561492   98161 start.go:340] cluster config:
	{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:20:05.561639   98161 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:20:05.563713   98161 out.go:177] * Starting "ha-285481" primary control-plane node in "ha-285481" cluster
	I0315 23:20:05.565167   98161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:20:05.565210   98161 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 23:20:05.565226   98161 cache.go:56] Caching tarball of preloaded images
	I0315 23:20:05.565320   98161 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:20:05.565333   98161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:20:05.565468   98161 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:20:05.565691   98161 start.go:360] acquireMachinesLock for ha-285481: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:20:05.565763   98161 start.go:364] duration metric: took 51.025µs to acquireMachinesLock for "ha-285481"
	I0315 23:20:05.565783   98161 start.go:96] Skipping create...Using existing machine configuration
	I0315 23:20:05.565792   98161 fix.go:54] fixHost starting: 
	I0315 23:20:05.566054   98161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:20:05.566097   98161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:20:05.580611   98161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0315 23:20:05.581063   98161 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:20:05.581567   98161 main.go:141] libmachine: Using API Version  1
	I0315 23:20:05.581587   98161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:20:05.581918   98161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:20:05.582088   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:20:05.582299   98161 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:20:05.584067   98161 fix.go:112] recreateIfNeeded on ha-285481: state=Running err=<nil>
	W0315 23:20:05.584083   98161 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 23:20:05.586202   98161 out.go:177] * Updating the running kvm2 "ha-285481" VM ...
	I0315 23:20:05.587724   98161 machine.go:94] provisionDockerMachine start ...
	I0315 23:20:05.587741   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:20:05.587924   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.590475   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.590936   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.590964   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.591142   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:05.591342   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.591517   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.591659   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:05.591840   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:05.592010   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:05.592021   98161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 23:20:05.700908   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481
	
	I0315 23:20:05.700936   98161 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:20:05.701218   98161 buildroot.go:166] provisioning hostname "ha-285481"
	I0315 23:20:05.701253   98161 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:20:05.701460   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.704335   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.704750   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.704776   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.704974   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:05.705204   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.705387   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.705539   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:05.705747   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:05.706019   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:05.706041   98161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481 && echo "ha-285481" | sudo tee /etc/hostname
	I0315 23:20:05.827360   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481
	
	I0315 23:20:05.827411   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.830463   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.830920   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.830950   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.831156   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:05.831384   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.831569   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.831725   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:05.831929   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:05.832158   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:05.832175   98161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:20:05.932204   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:20:05.932234   98161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:20:05.932264   98161 buildroot.go:174] setting up certificates
	I0315 23:20:05.932273   98161 provision.go:84] configureAuth start
	I0315 23:20:05.932281   98161 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:20:05.932657   98161 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:20:05.935436   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.935846   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.935873   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.936016   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.938456   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.938768   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.938806   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.938908   98161 provision.go:143] copyHostCerts
	I0315 23:20:05.938948   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:20:05.939012   98161 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:20:05.939023   98161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:20:05.939105   98161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:20:05.939233   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:20:05.939260   98161 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:20:05.939269   98161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:20:05.939308   98161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:20:05.939401   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:20:05.939424   98161 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:20:05.939434   98161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:20:05.939468   98161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:20:05.939544   98161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481 san=[127.0.0.1 192.168.39.23 ha-285481 localhost minikube]
	I0315 23:20:06.044007   98161 provision.go:177] copyRemoteCerts
	I0315 23:20:06.044102   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:20:06.044132   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:06.047154   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.047563   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:06.047587   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.047771   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:06.047971   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:06.048209   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:06.048386   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:20:06.134348   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:20:06.134433   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 23:20:06.164481   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:20:06.164572   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:20:06.194126   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:20:06.194232   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:20:06.223304   98161 provision.go:87] duration metric: took 291.013841ms to configureAuth
	I0315 23:20:06.223345   98161 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:20:06.223649   98161 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:20:06.223739   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:06.226888   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.227367   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:06.227397   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.227677   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:06.227847   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:06.228033   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:06.228181   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:06.228358   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:06.228551   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:06.228572   98161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:21:37.081788   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:21:37.081884   98161 machine.go:97] duration metric: took 1m31.494128507s to provisionDockerMachine
	I0315 23:21:37.081902   98161 start.go:293] postStartSetup for "ha-285481" (driver="kvm2")
	I0315 23:21:37.081974   98161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:21:37.082005   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.082374   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:21:37.082404   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.085889   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.086457   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.086488   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.086665   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.086887   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.087026   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.087174   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:21:37.172388   98161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:21:37.177380   98161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:21:37.177409   98161 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:21:37.177487   98161 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:21:37.177561   98161 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:21:37.177592   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:21:37.177673   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:21:37.188641   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:21:37.218040   98161 start.go:296] duration metric: took 136.119484ms for postStartSetup
	I0315 23:21:37.218110   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.218448   98161 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 23:21:37.218477   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.221520   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.221973   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.222003   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.222069   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.222266   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.222443   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.222680   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	W0315 23:21:37.302825   98161 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 23:21:37.302851   98161 fix.go:56] duration metric: took 1m31.737061421s for fixHost
	I0315 23:21:37.302875   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.305457   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.305843   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.305871   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.306039   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.306269   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.306438   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.306567   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.306755   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:21:37.306933   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:21:37.306944   98161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:21:37.408903   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544897.375688930
	
	I0315 23:21:37.408929   98161 fix.go:216] guest clock: 1710544897.375688930
	I0315 23:21:37.408936   98161 fix.go:229] Guest: 2024-03-15 23:21:37.37568893 +0000 UTC Remote: 2024-03-15 23:21:37.302859814 +0000 UTC m=+91.892073006 (delta=72.829116ms)
	I0315 23:21:37.408965   98161 fix.go:200] guest clock delta is within tolerance: 72.829116ms
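The fixHost step above only proceeds because the measured guest/host clock delta (72.829116ms) is inside tolerance. A minimal Go sketch of that comparison, using illustrative names rather than minikube's actual fix.go helpers:

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock differs from the host
// clock by no more than the given tolerance. Illustrative only.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	guest := time.Unix(1710544897, 375688930) // value read back from the VM's clock
	host := time.Now()
	fmt.Println("within tolerance:", withinTolerance(guest, host, 2*time.Second))
}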
	I0315 23:21:37.408971   98161 start.go:83] releasing machines lock for "ha-285481", held for 1m31.843195787s
	I0315 23:21:37.408989   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.409274   98161 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:21:37.412007   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.412387   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.412436   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.412551   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.413180   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.413403   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.413528   98161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:21:37.413595   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.413620   98161 ssh_runner.go:195] Run: cat /version.json
	I0315 23:21:37.413641   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.416298   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.416564   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.416627   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.416653   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.416782   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.417015   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.417090   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.417119   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.417172   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.417233   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.417661   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:21:37.417693   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.417885   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.418072   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:21:37.493035   98161 ssh_runner.go:195] Run: systemctl --version
	I0315 23:21:37.521636   98161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:21:37.688419   98161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:21:37.701191   98161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:21:37.701269   98161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:21:37.712058   98161 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 23:21:37.712084   98161 start.go:494] detecting cgroup driver to use...
	I0315 23:21:37.712144   98161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:21:37.730725   98161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:21:37.745988   98161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:21:37.746046   98161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:21:37.761556   98161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:21:37.776923   98161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:21:37.950845   98161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:21:38.129846   98161 docker.go:233] disabling docker service ...
	I0315 23:21:38.129914   98161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:21:38.157939   98161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:21:38.209719   98161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:21:38.484160   98161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:21:38.723303   98161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
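The runtime detection above boils down to asking systemd whether each candidate runtime unit is active and stopping or masking the ones that should not own the CRI socket. A rough Go sketch of the is-active check (assumes systemctl is available; this is not the actual minikube code):

package main

import (
	"fmt"
	"os/exec"
)

// unitActive mirrors `sudo systemctl is-active --quiet <unit>`:
// systemd exits 0 when the unit is active, non-zero otherwise.
func unitActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, u := range []string{"containerd", "docker", "crio"} {
		fmt.Printf("%s active: %v\n", u, unitActive(u))
	}
}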
	I0315 23:21:38.755250   98161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:21:38.789927   98161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:21:38.790042   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.810926   98161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:21:38.811014   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.823490   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.839713   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.851209   98161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:21:38.863308   98161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:21:38.873879   98161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:21:38.884928   98161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:21:39.038018   98161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:21:49.211714   98161 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.173636255s)
	I0315 23:21:49.211746   98161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:21:49.211812   98161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
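After restarting cri-o, the run waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl. That wait is essentially a stat-in-a-loop with a deadline; a simplified Go sketch of the pattern (hypothetical helper, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for path until it exists or the timeout elapses.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}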
	I0315 23:21:49.217287   98161 start.go:562] Will wait 60s for crictl version
	I0315 23:21:49.217346   98161 ssh_runner.go:195] Run: which crictl
	I0315 23:21:49.221613   98161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:21:49.261929   98161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:21:49.262023   98161 ssh_runner.go:195] Run: crio --version
	I0315 23:21:49.293216   98161 ssh_runner.go:195] Run: crio --version
	I0315 23:21:49.325628   98161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:21:49.327131   98161 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:21:49.329904   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:49.330323   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:49.330351   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:49.330546   98161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:21:49.335523   98161 kubeadm.go:877] updating cluster {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 23:21:49.335667   98161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:21:49.335713   98161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:21:49.388250   98161 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:21:49.388278   98161 crio.go:415] Images already preloaded, skipping extraction
	I0315 23:21:49.388340   98161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:21:49.424625   98161 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:21:49.424660   98161 cache_images.go:84] Images are preloaded, skipping loading
	I0315 23:21:49.424695   98161 kubeadm.go:928] updating node { 192.168.39.23 8443 v1.28.4 crio true true} ...
	I0315 23:21:49.424826   98161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:21:49.424910   98161 ssh_runner.go:195] Run: crio config
	I0315 23:21:49.473284   98161 cni.go:84] Creating CNI manager for ""
	I0315 23:21:49.473312   98161 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 23:21:49.473326   98161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 23:21:49.473354   98161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-285481 NodeName:ha-285481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/m
anifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 23:21:49.473515   98161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-285481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 23:21:49.473542   98161 kube-vip.go:111] generating kube-vip config ...
	I0315 23:21:49.473592   98161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:21:49.486548   98161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:21:49.486676   98161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0315 23:21:49.486736   98161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:21:49.497506   98161 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 23:21:49.497596   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 23:21:49.508513   98161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 23:21:49.526819   98161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:21:49.544723   98161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 23:21:49.562193   98161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:21:49.579487   98161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:21:49.584268   98161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:21:49.739015   98161 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:21:49.754696   98161 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.23
	I0315 23:21:49.754720   98161 certs.go:194] generating shared ca certs ...
	I0315 23:21:49.754764   98161 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:21:49.754971   98161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:21:49.755019   98161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:21:49.755031   98161 certs.go:256] generating profile certs ...
	I0315 23:21:49.755136   98161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:21:49.755168   98161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a
	I0315 23:21:49.755194   98161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.201 192.168.39.248 192.168.39.254]
	I0315 23:21:49.833761   98161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a ...
	I0315 23:21:49.833801   98161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a: {Name:mka8dc5e0a5c882cbf8137eb9dd03f3b19698962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:21:49.834026   98161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a ...
	I0315 23:21:49.834046   98161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a: {Name:mk9a4f673a35e8084eec2bb643e2d38493c92b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:21:49.834153   98161 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:21:49.834314   98161 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
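The apiserver profile cert generated above carries the service IP, loopback, the control-plane node IPs, and the HA VIP as subject alternative names. A self-contained crypto/x509 sketch of producing a cert with IP SANs like those (self-signed here purely for illustration; minikube signs against its own CA):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs taken from the IP list logged above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("192.168.39.23"), net.ParseIP("192.168.39.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}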
	I0315 23:21:49.834483   98161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:21:49.834503   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:21:49.834518   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:21:49.834542   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:21:49.834564   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:21:49.834580   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:21:49.834595   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:21:49.834612   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:21:49.834626   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:21:49.834684   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:21:49.834725   98161 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:21:49.834738   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:21:49.834763   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:21:49.834792   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:21:49.834812   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:21:49.834854   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:21:49.834887   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:21:49.834910   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:49.834924   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:21:49.835648   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:21:49.864340   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:21:49.892431   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:21:49.918794   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:21:49.945988   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 23:21:49.971706   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:21:49.998672   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:21:50.025893   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:21:50.051995   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:21:50.079274   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:21:50.107872   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:21:50.136261   98161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 23:21:50.154686   98161 ssh_runner.go:195] Run: openssl version
	I0315 23:21:50.161740   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:21:50.174898   98161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:21:50.180368   98161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:21:50.180465   98161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:21:50.186848   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:21:50.197192   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:21:50.209246   98161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:21:50.214716   98161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:21:50.214786   98161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:21:50.227111   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:21:50.242928   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:21:50.255039   98161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:50.259947   98161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:50.260030   98161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:50.266329   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
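Each CA certificate above is hashed with `openssl x509 -hash` and then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL's trust-store lookup can find it. A rough Go equivalent of those two shell steps (hypothetical helper, shown only to clarify what the logged commands do):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of a PEM cert and
// symlinks the cert as /etc/ssl/certs/<hash>.0, matching `ln -fs`.
func linkCertByHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replicate ln -fs (force) semantics
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}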
	I0315 23:21:50.276426   98161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:21:50.281225   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 23:21:50.287203   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 23:21:50.293325   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 23:21:50.299543   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 23:21:50.305761   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 23:21:50.311986   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
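The `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours. The same question can be asked in Go with crypto/x509; an illustrative sketch, not minikube code:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println("expires within 24h:", soon, err)
}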
	I0315 23:21:50.318207   98161 kubeadm.go:391] StartCluster: {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:21:50.318360   98161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 23:21:50.318494   98161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 23:21:50.359937   98161 cri.go:89] found id: "c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00"
	I0315 23:21:50.359966   98161 cri.go:89] found id: "f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d"
	I0315 23:21:50.359973   98161 cri.go:89] found id: "9d334bd492573c9230531332ad4ebbf1608dda09f4fdb76d29eedbf1830fb84c"
	I0315 23:21:50.359978   98161 cri.go:89] found id: "61e89e5375f385fd70fe2392aab4c6a216de1586dc519260fdc0a649ab724d90"
	I0315 23:21:50.359982   98161 cri.go:89] found id: "a6d28b03cb917d55f42fba769ea3d6b48a0e57137c6870c4d878c4edd3e812a6"
	I0315 23:21:50.359987   98161 cri.go:89] found id: "b450101b891fe9ce8fa24f56acdf7c4b48513c4f86b9760e54807cf4d2f8a42d"
	I0315 23:21:50.359990   98161 cri.go:89] found id: "4b9eb5f63654c989746462193024e76ae0e8c941899ac5f4c5f7ed7a25755404"
	I0315 23:21:50.359993   98161 cri.go:89] found id: "706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb"
	I0315 23:21:50.359995   98161 cri.go:89] found id: "213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1"
	I0315 23:21:50.360007   98161 cri.go:89] found id: "3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53"
	I0315 23:21:50.360010   98161 cri.go:89] found id: "46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316"
	I0315 23:21:50.360013   98161 cri.go:89] found id: "e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2"
	I0315 23:21:50.360022   98161 cri.go:89] found id: "bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db"
	I0315 23:21:50.360024   98161 cri.go:89] found id: "b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938"
	I0315 23:21:50.360028   98161 cri.go:89] found id: "122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079"
	I0315 23:21:50.360031   98161 cri.go:89] found id: "a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca"
	I0315 23:21:50.360033   98161 cri.go:89] found id: ""
	I0315 23:21:50.360083   98161 ssh_runner.go:195] Run: sudo runc list -f json
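The StartCluster step enumerates existing kube-system containers by shelling out to crictl with a pod-namespace label filter, which is what produces the "found id" list above. A rough Go equivalent of that call (a sketch assuming crictl is on PATH; not minikube's cri.go):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers returns the IDs of all containers labelled
// with the kube-system pod namespace, mirroring the logged crictl call.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	ids, err := listKubeSystemContainers()
	fmt.Println(len(ids), "kube-system containers found", err)
}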
	
	
	==> CRI-O <==
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.751445830Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545062751415192,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7fde67b7-e7f4-4c38-ab86-6ab0debd11ef name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.752478312Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3126c7a7-c5d3-49e1-9a9d-6c4d2eeda471 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.752559557Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3126c7a7-c5d3-49e1-9a9d-6c4d2eeda471 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.753072376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\
"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes
.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-
proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291ca
fdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe
,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.
kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash:
aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aa
ebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec
35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff144
24cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CO
NTAINER_EXITED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544
232506307460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3126c7a7-c5d3-49e1-9a9d-6c4d2eeda471 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.802599893Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0377d812-70ed-47fd-866e-81f79c59422b name=/runtime.v1.RuntimeService/Version
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.802741848Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0377d812-70ed-47fd-866e-81f79c59422b name=/runtime.v1.RuntimeService/Version
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.804299958Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=907f0658-a4f1-4f6c-a5c0-86a8aafa010e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.804891835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545062804859032,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=907f0658-a4f1-4f6c-a5c0-86a8aafa010e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.805463555Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4b4c78f-1042-40cc-af22-7f3593cbc21e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.805532834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4b4c78f-1042-40cc-af22-7f3593cbc21e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.806946521Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\
"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes
.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-
proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291ca
fdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe
,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.
kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash:
aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aa
ebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec
35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff144
24cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CO
NTAINER_EXITED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544
232506307460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4b4c78f-1042-40cc-af22-7f3593cbc21e name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.865931614Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17ff3240-cade-459c-b47b-a26258ff72fc name=/runtime.v1.RuntimeService/Version
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.866034395Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17ff3240-cade-459c-b47b-a26258ff72fc name=/runtime.v1.RuntimeService/Version
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.867218419Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=18e60227-c3a7-49c4-902b-89e162337e5c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.867893365Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545062867855930,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=18e60227-c3a7-49c4-902b-89e162337e5c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.868583268Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d6e0cc60-b4c5-49b7-8c10-ce9aff1bcfb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.868714053Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d6e0cc60-b4c5-49b7-8c10-ce9aff1bcfb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.869149299Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\
"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes
.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-
proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291ca
fdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe
,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.
kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash:
aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aa
ebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec
35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff144
24cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CO
NTAINER_EXITED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544
232506307460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d6e0cc60-b4c5-49b7-8c10-ce9aff1bcfb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.951767506Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d89f24d3-59a7-4612-bbf6-003928387569 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.951845406Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d89f24d3-59a7-4612-bbf6-003928387569 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.955846852Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=15e96070-c249-48ae-a961-c2bf5fdc901f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.956414808Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545062956386148,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=15e96070-c249-48ae-a961-c2bf5fdc901f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.957147760Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1971ac4a-8852-4afb-9982-d114c5c2704d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.957229642Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1971ac4a-8852-4afb-9982-d114c5c2704d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:24:22 ha-285481 crio[4345]: time="2024-03-15 23:24:22.957699146Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termi
nationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\
"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes
.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-
proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,
io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291ca
fdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe
,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[str
ing]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.
kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash:
aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.
name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aa
ebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec
35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff144
24cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CO
NTAINER_EXITED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544
232506307460,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1971ac4a-8852-4afb-9982-d114c5c2704d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	7aa0725072636       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               4                   a8569823c9966       kindnet-9fd6f
	bddf2a55a13f4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   b6941d02e47f9       kube-controller-manager-ha-285481
	2dae1e7bd9a21       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   ca198fa2b4b26       kube-apiserver-ha-285481
	7cd54d913bb97       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   7045b908ab1b8       busybox-5b5d89c9d6-klvd7
	ca86ee48e13d9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   2                   d07c930b8ebca       coredns-5dd5756b68-qxtp4
	915cb8e3716e7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   2                   db6cf9923e6bd       coredns-5dd5756b68-9c44k
	189cc226d08d1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   fcd137b8d3b9f       kube-proxy-cml9m
	64c336a92304b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               3                   a8569823c9966       kindnet-9fd6f
	a32853d47c1c9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   8317b128a7c91       etcd-ha-285481
	931143ffbabb9       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   ca198fa2b4b26       kube-apiserver-ha-285481
	809dd4d909af8       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Running             kube-vip                  3                   5a920e208332e       kube-vip-ha-285481
	97929629ecc64       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   aa73d4571ec49       kube-scheduler-ha-285481
	803b9cd3104df       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   b6941d02e47f9       kube-controller-manager-ha-285481
	4a6e055fb6555       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       5                   87554e8de736b       storage-provisioner
	c09a826d051f4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Exited              coredns                   1                   6689feff0da96       coredns-5dd5756b68-qxtp4
	f160f73a45516       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Exited              coredns                   1                   68895f60fb2d2       coredns-5dd5756b68-9c44k
	706f8e951e3f0       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago        Exited              kube-vip                  2                   0a7887e08f455       kube-vip-ha-285481
	7e21f8e6f1787       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   8857e9f8aa447       busybox-5b5d89c9d6-klvd7
	e7c7732963470       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   5404a98a681ea       kube-proxy-cml9m
	b1799ad1e14d3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago       Exited              kube-scheduler            0                   9a7f75d914382       kube-scheduler-ha-285481
	a6eaa3307ddf1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago       Exited              etcd                      0                   8e777ceb1c377       etcd-ha-285481
	
	
	==> coredns [915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59945 - 7882 "HINFO IN 146871251892497703.7499368807378574144. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021073832s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:42795 - 12711 "HINFO IN 2521867098849721333.7657461000445057215. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018792575s
	
	
	==> coredns [ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39952 - 23738 "HINFO IN 3992818473559059300.5048460502519144909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021732356s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46860 - 14168 "HINFO IN 6553267951195986899.7024224011946022153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020673615s
	
	
	==> describe nodes <==
	Name:               ha-285481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T23_10_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:10:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:24:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:22:50 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:22:50 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:22:50 +0000   Fri, 15 Mar 2024 23:10:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:22:50 +0000   Fri, 15 Mar 2024 23:10:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-285481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7afae64232d041e98363d899e90f24b0
	  System UUID:                7afae642-32d0-41e9-8363-d899e90f24b0
	  Boot ID:                    ac63bdb2-abe3-40ea-a654-ca3224dec308
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-klvd7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-9c44k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5dd5756b68-qxtp4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-285481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-9fd6f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-285481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-285481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-cml9m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-285481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-285481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 105s                   kube-proxy       
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-285481 status is now: NodeHasSufficientPID
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-285481 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-285481 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-285481 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Warning  ContainerGCFailed        2m44s (x2 over 3m44s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           91s                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           90s                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           31s                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	
	
	Name:               ha-285481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_12_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:11:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:24:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-285481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f269fbf2ace479a8b9438486949ceb1
	  System UUID:                7f269fbf-2ace-479a-8b94-38486949ceb1
	  Boot ID:                    9b2251a9-b538-488f-a580-06286d5f2e17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tgxps                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-285481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-pnxpk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-285481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-285481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-2hcgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-285481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-285481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 80s                  kube-proxy       
	  Normal  RegisteredNode           12m                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  NodeNotReady             9m3s                 node-controller  Node ha-285481-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node ha-285481-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node ha-285481-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node ha-285481-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           91s                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode           90s                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode           31s                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	
	
	Name:               ha-285481-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_13_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:13:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:24:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:23:54 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:23:54 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:23:54 +0000   Fri, 15 Mar 2024 23:13:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:23:54 +0000   Fri, 15 Mar 2024 23:13:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.248
	  Hostname:    ha-285481-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 efeed3fefd6b40f689eaa7f1842dcbc9
	  System UUID:                efeed3fe-fd6b-40f6-89ea-a7f1842dcbc9
	  Boot ID:                    3a08578c-1007-41b2-b0ea-78d693b8541d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-cc7rx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-285481-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-zptcr                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-285481-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-285481-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-d2fjd                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-285481-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-285481-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 11m                kube-proxy       
	  Normal   Starting                 39s                kube-proxy       
	  Normal   RegisteredNode           11m                node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal   RegisteredNode           11m                node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal   RegisteredNode           91s                node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal   RegisteredNode           90s                node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  60s (x2 over 60s)  kubelet          Node ha-285481-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s (x2 over 60s)  kubelet          Node ha-285481-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s (x2 over 60s)  kubelet          Node ha-285481-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 60s                kubelet          Node ha-285481-m03 has been rebooted, boot id: 3a08578c-1007-41b2-b0ea-78d693b8541d
	  Normal   RegisteredNode           31s                node-controller  Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller
	
	
	Name:               ha-285481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_14_18_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:14:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:24:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:24:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:24:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:24:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:24:15 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-285481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8e447d79a3745579ec32c4638493b56
	  System UUID:                d8e447d7-9a37-4557-9ec3-2c4638493b56
	  Boot ID:                    584eb9ad-42a9-4678-9ac1-7e878b8d2dbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-vzxwb       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-sr2rg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 4s                 kube-proxy       
	  Normal   Starting                 10m                kube-proxy       
	  Normal   NodeHasSufficientMemory  10m (x5 over 10m)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m (x5 over 10m)  kubelet          Node ha-285481-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x5 over 10m)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           10m                node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   NodeReady                9m58s              kubelet          Node ha-285481-m04 status is now: NodeReady
	  Normal   RegisteredNode           91s                node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           90s                node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   NodeNotReady             51s                node-controller  Node ha-285481-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           31s                node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   Starting                 8s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)    kubelet          Node ha-285481-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)    kubelet          Node ha-285481-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)    kubelet          Node ha-285481-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                 kubelet          Node ha-285481-m04 has been rebooted, boot id: 584eb9ad-42a9-4678-9ac1-7e878b8d2dbf
	  Normal   NodeReady                8s                 kubelet          Node ha-285481-m04 status is now: NodeReady
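The node description above can be regenerated against a live run with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name ha-285481 used throughout these logs:

  kubectl --context ha-285481 describe node ha-285481-m04
  kubectl --context ha-285481 get node ha-285481-m04 -o wide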
	
	
	==> dmesg <==
	[  +9.660439] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075855] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.154877] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.138534] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.233962] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.832504] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.064584] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.460453] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.636379] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.224199] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.090626] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.490228] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.030141] kauditd_printk_skb: 53 callbacks suppressed
	[Mar15 23:11] kauditd_printk_skb: 11 callbacks suppressed
	[Mar15 23:18] kauditd_printk_skb: 1 callbacks suppressed
	[Mar15 23:21] systemd-fstab-generator[4051]: Ignoring "noauto" option for root device
	[  +0.177892] systemd-fstab-generator[4063]: Ignoring "noauto" option for root device
	[  +0.307408] systemd-fstab-generator[4128]: Ignoring "noauto" option for root device
	[  +0.241200] systemd-fstab-generator[4195]: Ignoring "noauto" option for root device
	[  +0.365007] systemd-fstab-generator[4324]: Ignoring "noauto" option for root device
	[ +10.693841] systemd-fstab-generator[4479]: Ignoring "noauto" option for root device
	[  +0.099959] kauditd_printk_skb: 120 callbacks suppressed
	[Mar15 23:22] kauditd_printk_skb: 108 callbacks suppressed
	[ +28.844834] kauditd_printk_skb: 1 callbacks suppressed
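The kernel messages above come from inside the ha-285481 guest VM; while a run is still alive the same data can be pulled on demand with the standard minikube commands (a sketch, not part of the test steps themselves):

  minikube -p ha-285481 ssh -- dmesg
  minikube -p ha-285481 logs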
	
	
	==> etcd [a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991] <==
	{"level":"warn","ts":"2024-03-15T23:23:17.351507Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:23:17.451103Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:23:17.551177Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:23:17.651116Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"c6baa4636f442c95","from":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-15T23:23:19.036704Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"de3c7347423bf565","rtt":"0s","error":"dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:19.036818Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de3c7347423bf565","rtt":"0s","error":"dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:20.144159Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.248:2380/version","remote-member-id":"de3c7347423bf565","error":"Get \"https://192.168.39.248:2380/version\": dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:20.144336Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"de3c7347423bf565","error":"Get \"https://192.168.39.248:2380/version\": dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:24.037797Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de3c7347423bf565","rtt":"0s","error":"dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:24.03792Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"de3c7347423bf565","rtt":"0s","error":"dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:24.146897Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.248:2380/version","remote-member-id":"de3c7347423bf565","error":"Get \"https://192.168.39.248:2380/version\": dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:24.147013Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"de3c7347423bf565","error":"Get \"https://192.168.39.248:2380/version\": dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:28.148471Z","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL","address":"https://192.168.39.248:2380/version","remote-member-id":"de3c7347423bf565","error":"Get \"https://192.168.39.248:2380/version\": dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:28.148782Z","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remote-member-id":"de3c7347423bf565","error":"Get \"https://192.168.39.248:2380/version\": dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:29.038893Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"de3c7347423bf565","rtt":"0s","error":"dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:29.039026Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"de3c7347423bf565","rtt":"0s","error":"dial tcp 192.168.39.248:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-15T23:23:30.64938Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"200.259605ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:4 size:19358"}
	{"level":"info","ts":"2024-03-15T23:23:30.649773Z","caller":"traceutil/trace.go:171","msg":"trace[653062951] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:4; response_revision:2194; }","duration":"200.717278ms","start":"2024-03-15T23:23:30.449024Z","end":"2024-03-15T23:23:30.649741Z","steps":["trace[653062951] 'range keys from in-memory index tree'  (duration: 199.314742ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:23:31.393922Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.401474Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c6baa4636f442c95","to":"de3c7347423bf565","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T23:23:31.401589Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.407Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.407524Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.409911Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c6baa4636f442c95","to":"de3c7347423bf565","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T23:23:31.410099Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
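The dropped-heartbeat and connection-refused warnings above line up with the window in which the peer at 192.168.39.248 (ha-285481-m03) was stopped; the final lines show that peer becoming active again. One way to confirm membership afterwards is an etcdctl query from the etcd static pod. This is only a sketch: it assumes the usual static-pod name etcd-ha-285481, that the image ships sh and etcdctl, and minikube's certificate layout under /var/lib/minikube/certs/etcd:

  kubectl --context ha-285481 -n kube-system exec etcd-ha-285481 -- sh -c \
    'ETCDCTL_API=3 etcdctl --cacert /var/lib/minikube/certs/etcd/ca.crt \
       --cert /var/lib/minikube/certs/etcd/server.crt --key /var/lib/minikube/certs/etcd/server.key \
       --endpoints https://127.0.0.1:2379 endpoint status --cluster -w table'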
	
	
	==> etcd [a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca] <==
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-15T23:20:06.628082Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3212630333545603566,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T23:20:06.629359Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:20:06.629394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T23:20:06.629465Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c6baa4636f442c95","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T23:20:06.629678Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.629734Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.629862Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.629982Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.630054Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.630126Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.63014Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.630145Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630153Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630179Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.63025Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630298Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.63033Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630361Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.633146Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-03-15T23:20:06.633251Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-03-15T23:20:06.633263Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-285481","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"]}
	
	
	==> kernel <==
	 23:24:23 up 14 min,  0 users,  load average: 0.66, 0.49, 0.37
	Linux ha-285481 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d] <==
	I0315 23:21:53.327677       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 23:22:03.691057       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0315 23:22:13.700856       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0315 23:22:14.702477       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 23:22:21.676105       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 23:22:24.747810       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
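The panic above is the pre-restart kindnet container exhausting its retries while the in-cluster apiserver address 10.96.0.1 was unreachable; the replacement container in the next section is the one that recovered. A quick status check for the DaemonSet, assuming the app=kindnet label that minikube's CNI manifest applies to these pods:

  kubectl --context ha-285481 -n kube-system get pods -l app=kindnet -o wide
  kubectl --context ha-285481 -n kube-system logs -l app=kindnet --tail=20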
	
	
	==> kindnet [7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b] <==
	I0315 23:23:44.436084       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:23:54.454152       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:23:54.454304       1 main.go:227] handling current node
	I0315 23:23:54.454338       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:23:54.454367       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:23:54.454675       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:23:54.454746       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:23:54.454869       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:23:54.454905       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:24:04.467854       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:24:04.467962       1 main.go:227] handling current node
	I0315 23:24:04.468008       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:24:04.468034       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:24:04.468274       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:24:04.468314       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:24:04.468408       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:24:04.468436       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:24:14.477036       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:24:14.477127       1 main.go:227] handling current node
	I0315 23:24:14.477154       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:24:14.477172       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:24:14.477310       1 main.go:223] Handling node with IPs: map[192.168.39.248:{}]
	I0315 23:24:14.477330       1 main.go:250] Node ha-285481-m03 has CIDR [10.244.2.0/24] 
	I0315 23:24:14.477418       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:24:14.477442       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a] <==
	I0315 23:22:40.232434       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 23:22:40.232461       1 controller.go:116] Starting legacy_token_tracking_controller
	I0315 23:22:40.232467       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0315 23:22:40.287388       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 23:22:40.290762       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 23:22:40.322571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 23:22:40.354071       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 23:22:40.354299       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 23:22:40.354449       1 aggregator.go:166] initial CRD sync complete...
	I0315 23:22:40.354488       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 23:22:40.354495       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 23:22:40.354501       1 cache.go:39] Caches are synced for autoregister controller
	I0315 23:22:40.419675       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 23:22:40.428146       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 23:22:40.428225       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 23:22:40.428367       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 23:22:40.430256       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 23:22:40.461185       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0315 23:22:40.471253       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.248]
	I0315 23:22:40.472801       1 controller.go:624] quota admission added evaluator for: endpoints
	I0315 23:22:40.489091       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0315 23:22:40.495135       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0315 23:22:41.239538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0315 23:22:41.735789       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.248]
	W0315 23:23:01.731149       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.201 192.168.39.23]
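The two 'Resetting endpoints for master service "kubernetes"' lines show the apiserver re-advertising whichever control-plane IPs are healthy at that moment; the same state can be read back from the API once the cluster answers (a sketch):

  kubectl --context ha-285481 get endpoints kubernetes -o wide
  kubectl --context ha-285481 get --raw='/readyz?verbose'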
	
	
	==> kube-apiserver [931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b] <==
	I0315 23:21:53.246963       1 options.go:220] external host was not specified, using 192.168.39.23
	I0315 23:21:53.248394       1 server.go:148] Version: v1.28.4
	I0315 23:21:53.248537       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:21:54.219360       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0315 23:21:54.246387       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0315 23:21:54.247570       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0315 23:21:54.248026       1 instance.go:298] Using reconciler: lease
	W0315 23:22:14.215530       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0315 23:22:14.218367       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0315 23:22:14.249802       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a] <==
	I0315 23:21:53.805373       1 serving.go:348] Generated self-signed cert in-memory
	I0315 23:21:54.490131       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 23:21:54.490176       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:21:54.492489       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 23:21:54.492710       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 23:21:54.493035       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 23:21:54.493271       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0315 23:22:15.257141       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.23:8443/healthz\": dial tcp 192.168.39.23:8443: connect: connection refused"
	
	
	==> kube-controller-manager [bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316] <==
	I0315 23:22:52.802577       1 shared_informer.go:318] Caches are synced for resource quota
	I0315 23:22:52.832536       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0315 23:22:52.836096       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0315 23:22:52.837400       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0315 23:22:52.838690       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0315 23:22:52.908212       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-285481-m04"
	I0315 23:22:52.908345       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-285481"
	I0315 23:22:52.908442       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-285481-m02"
	I0315 23:22:52.908951       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-285481-m03"
	I0315 23:22:52.909022       1 event.go:307] "Event occurred" object="ha-285481-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-285481-m03 event: Registered Node ha-285481-m03 in Controller"
	I0315 23:22:52.909041       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-tgxps" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-tgxps"
	I0315 23:22:52.914000       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0315 23:22:53.233138       1 shared_informer.go:318] Caches are synced for garbage collector
	I0315 23:22:53.233234       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0315 23:22:53.235519       1 shared_informer.go:318] Caches are synced for garbage collector
	I0315 23:23:06.273976       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="26.091813ms"
	I0315 23:23:06.275033       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="80.674µs"
	I0315 23:23:24.481955       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="14.895144ms"
	I0315 23:23:24.482278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.797µs"
	I0315 23:23:32.937244       1 event.go:307] "Event occurred" object="ha-285481-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-285481-m04 status is now: NodeNotReady"
	I0315 23:23:32.966709       1 event.go:307] "Event occurred" object="kube-system/kindnet-vzxwb" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:23:32.995496       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-sr2rg" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:23:46.683240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="39.322474ms"
	I0315 23:23:46.683863       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="262.561µs"
	I0315 23:24:15.750330       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-285481-m04"
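These controller-manager entries are the server-side source of the NodeNotReady and RegisteredNode events shown in the node description earlier; the same stream can be pulled straight from the API with a field selector (a sketch):

  kubectl --context ha-285481 get events -A --field-selector involvedObject.name=ha-285481-m04 --sort-by=.lastTimestamp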
	
	
	==> kube-proxy [189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3] <==
	E0315 23:22:16.300070       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-285481": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:37.804699       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-285481": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 23:22:37.805219       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0315 23:22:37.847399       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:22:37.847431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:22:37.850239       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:22:37.850496       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:22:37.851074       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:22:37.851121       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:22:37.853223       1 config.go:188] "Starting service config controller"
	I0315 23:22:37.853307       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:22:37.853351       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:22:37.853387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:22:37.854699       1 config.go:315] "Starting node config controller"
	I0315 23:22:37.854738       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0315 23:22:40.875693       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0315 23:22:40.875886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:40.876026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:22:40.876101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:40.876126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:22:40.876210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:40.876237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 23:22:41.755002       1 shared_informer.go:318] Caches are synced for node config
	I0315 23:22:42.354520       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:22:42.354758       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2] <==
	E0315 23:18:45.867558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:45.867338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:45.867611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:52.523430       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:52.523783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:52.523881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:52.523952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:52.523577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:52.524007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:02.763863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:02.764086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:05.835130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:05.835714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:05.835879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:05.835940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:27.339733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:27.339810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:27.339892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:27.339944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:27.339996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:27.340106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:20:01.132525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:20:01.133376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:20:01.133268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:20:01.133586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
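Both kube-proxy logs above show the same failure mode: control-plane.minikube.internal (the shared apiserver address 192.168.39.254:8443) returning 'no route to host' while the control-plane nodes were stopped; the replacement kube-proxy in the earlier section only syncs its caches once that address answers again. A quick manual check from inside a node, as a sketch (it assumes curl is available in the guest image and that minikube pins the hostname in /etc/hosts):

  minikube -p ha-285481 ssh -- grep control-plane.minikube.internal /etc/hosts
  minikube -p ha-285481 ssh -- curl -ks https://192.168.39.254:8443/healthz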
	
	
	==> kube-scheduler [97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7] <==
	W0315 23:22:30.818183       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.39.23:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.23:8443: connect: connection refused
	E0315 23:22:30.818255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.39.23:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.39.23:8443: connect: connection refused
	W0315 23:22:31.841808       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: Get "https://192.168.39.23:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.23:8443: connect: connection refused
	E0315 23:22:31.841975       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.39.23:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.39.23:8443: connect: connection refused
	W0315 23:22:40.291050       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 23:22:40.291142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 23:22:40.291220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 23:22:40.291255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 23:22:40.291312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 23:22:40.291320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 23:22:40.291381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 23:22:40.291435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 23:22:40.291587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 23:22:40.291694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 23:22:40.291803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:22:40.291840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 23:22:40.291977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 23:22:40.292024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 23:22:40.292081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 23:22:40.292110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 23:22:40.292162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 23:22:40.292179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 23:22:40.320338       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 23:22:40.320484       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0315 23:23:05.868280       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
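The run of 'forbidden' errors here comes from the scheduler's informers starting while the freshly restarted apiserver apparently had not yet finished syncing its RBAC authorizer; the final cache-sync line marks recovery. To confirm the static pod settled afterwards, a sketch (the pod name assumes the usual kube-scheduler-<node> convention):

  kubectl --context ha-285481 -n kube-system get pod kube-scheduler-ha-285481
  kubectl --context ha-285481 -n kube-system logs kube-scheduler-ha-285481 --tail=5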
	
	
	==> kube-scheduler [b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938] <==
	W0315 23:19:59.785544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 23:19:59.785584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 23:19:59.795895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:19:59.795965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 23:20:00.130896       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 23:20:00.130979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 23:20:00.273877       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 23:20:00.274055       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 23:20:00.403325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 23:20:00.403487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 23:20:00.510982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 23:20:00.511059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 23:20:01.117108       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 23:20:01.117209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 23:20:01.163088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 23:20:01.163189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 23:20:01.557286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 23:20:01.557313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 23:20:05.373484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 23:20:05.373736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 23:20:06.132930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 23:20:06.132953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0315 23:20:06.350176       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 23:20:06.350303       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 23:20:06.350546       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 23:22:45 ha-285481 kubelet[1384]: I0315 23:22:45.401738    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:22:45 ha-285481 kubelet[1384]: E0315 23:22:45.401913    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:22:55 ha-285481 kubelet[1384]: I0315 23:22:55.831743    1384 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox-5b5d89c9d6-klvd7" podStartSLOduration=558.353657587 podCreationTimestamp="2024-03-15 23:13:36 +0000 UTC" firstStartedPulling="2024-03-15 23:13:37.360500204 +0000 UTC m=+178.105305520" lastFinishedPulling="2024-03-15 23:13:38.838425641 +0000 UTC m=+179.583230956" observedRunningTime="2024-03-15 23:13:39.327336253 +0000 UTC m=+180.072141588" watchObservedRunningTime="2024-03-15 23:22:55.831583023 +0000 UTC m=+736.576388357"
	Mar 15 23:22:56 ha-285481 kubelet[1384]: I0315 23:22:56.400746    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:22:56 ha-285481 kubelet[1384]: E0315 23:22:56.401437    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:22:58 ha-285481 kubelet[1384]: I0315 23:22:58.400844    1384 scope.go:117] "RemoveContainer" containerID="64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d"
	Mar 15 23:22:58 ha-285481 kubelet[1384]: E0315 23:22:58.401341    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kindnet-cni\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kindnet-cni pod=kindnet-9fd6f_kube-system(bfce84cd-8517-4081-bd7d-a32f21e4b5ad)\"" pod="kube-system/kindnet-9fd6f" podUID="bfce84cd-8517-4081-bd7d-a32f21e4b5ad"
	Mar 15 23:23:10 ha-285481 kubelet[1384]: I0315 23:23:10.400718    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:23:10 ha-285481 kubelet[1384]: E0315 23:23:10.401243    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:23:13 ha-285481 kubelet[1384]: I0315 23:23:13.400306    1384 scope.go:117] "RemoveContainer" containerID="64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d"
	Mar 15 23:23:21 ha-285481 kubelet[1384]: I0315 23:23:21.400412    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:23:21 ha-285481 kubelet[1384]: E0315 23:23:21.401250    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:23:34 ha-285481 kubelet[1384]: I0315 23:23:34.400763    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:23:34 ha-285481 kubelet[1384]: E0315 23:23:34.401949    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:23:39 ha-285481 kubelet[1384]: E0315 23:23:39.423458    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:23:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:23:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:23:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:23:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:23:48 ha-285481 kubelet[1384]: I0315 23:23:48.400874    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:23:48 ha-285481 kubelet[1384]: E0315 23:23:48.401240    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:24:03 ha-285481 kubelet[1384]: I0315 23:24:03.401115    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:24:03 ha-285481 kubelet[1384]: E0315 23:24:03.403360    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	Mar 15 23:24:17 ha-285481 kubelet[1384]: I0315 23:24:17.400173    1384 scope.go:117] "RemoveContainer" containerID="4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6"
	Mar 15 23:24:17 ha-285481 kubelet[1384]: E0315 23:24:17.400480    1384 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(53d0c1b0-3c5c-443e-a653-9b91407c8792)\"" pod="kube-system/storage-provisioner" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 23:24:22.424220   99260 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17991-75602/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-285481 -n ha-285481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-285481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (381.60s)
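
Note on the post-mortem capture above: the `bufio.Scanner: token too long` error from logs.go:258 is Go's scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, so the read aborts at the first over-long line. Below is a minimal sketch of the Go API involved, assuming a plain file read with a raised buffer cap; the file path and the 10 MiB limit are illustrative placeholders, and this is not minikube's actual logs.go code.

// readLongLines illustrates why "bufio.Scanner: token too long" appears above:
// bufio.Scanner refuses to return a single line longer than its 64 KiB default
// unless Buffer is called first. Generic sketch only; the path and the 10 MiB
// cap are placeholders, not minikube's logs.go.
package main

import (
	"bufio"
	"fmt"
	"os"
)

func readLongLines(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit from the 64 KiB default to 10 MiB.
	sc.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	return sc.Err() // reports "token too long" only if a line exceeds the cap
}

func main() {
	if err := readLongLines("lastStart.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
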

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (141.96s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 stop -v=7 --alsologtostderr
E0315 23:25:21.950875   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 stop -v=7 --alsologtostderr: exit status 82 (2m0.483270637s)

                                                
                                                
-- stdout --
	* Stopping node "ha-285481-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:24:42.926937   99645 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:24:42.927219   99645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:24:42.927230   99645 out.go:304] Setting ErrFile to fd 2...
	I0315 23:24:42.927234   99645 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:24:42.927493   99645 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:24:42.927790   99645 out.go:298] Setting JSON to false
	I0315 23:24:42.927900   99645 mustload.go:65] Loading cluster: ha-285481
	I0315 23:24:42.928329   99645 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:24:42.928437   99645 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:24:42.928612   99645 mustload.go:65] Loading cluster: ha-285481
	I0315 23:24:42.928765   99645 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:24:42.928799   99645 stop.go:39] StopHost: ha-285481-m04
	I0315 23:24:42.929270   99645 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:24:42.929318   99645 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:24:42.944086   99645 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36545
	I0315 23:24:42.944541   99645 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:24:42.945172   99645 main.go:141] libmachine: Using API Version  1
	I0315 23:24:42.945206   99645 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:24:42.945595   99645 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:24:42.948377   99645 out.go:177] * Stopping node "ha-285481-m04"  ...
	I0315 23:24:42.950126   99645 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0315 23:24:42.950174   99645 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:24:42.950459   99645 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0315 23:24:42.950491   99645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:24:42.953283   99645 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:24:42.953714   99645 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:24:10 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:24:42.953740   99645 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:24:42.953894   99645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:24:42.954088   99645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:24:42.954245   99645 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:24:42.954383   99645 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	I0315 23:24:43.034380   99645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0315 23:24:43.088234   99645 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0315 23:24:43.141719   99645 main.go:141] libmachine: Stopping "ha-285481-m04"...
	I0315 23:24:43.141774   99645 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:24:43.143418   99645 main.go:141] libmachine: (ha-285481-m04) Calling .Stop
	I0315 23:24:43.147417   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 0/120
	I0315 23:24:44.148670   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 1/120
	I0315 23:24:45.150046   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 2/120
	I0315 23:24:46.151381   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 3/120
	I0315 23:24:47.152601   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 4/120
	I0315 23:24:48.154827   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 5/120
	I0315 23:24:49.156337   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 6/120
	I0315 23:24:50.157609   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 7/120
	I0315 23:24:51.158879   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 8/120
	I0315 23:24:52.160213   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 9/120
	I0315 23:24:53.162358   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 10/120
	I0315 23:24:54.163810   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 11/120
	I0315 23:24:55.165784   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 12/120
	I0315 23:24:56.167200   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 13/120
	I0315 23:24:57.169504   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 14/120
	I0315 23:24:58.171562   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 15/120
	I0315 23:24:59.173827   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 16/120
	I0315 23:25:00.175043   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 17/120
	I0315 23:25:01.176354   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 18/120
	I0315 23:25:02.177652   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 19/120
	I0315 23:25:03.179993   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 20/120
	I0315 23:25:04.181876   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 21/120
	I0315 23:25:05.183172   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 22/120
	I0315 23:25:06.184584   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 23/120
	I0315 23:25:07.186266   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 24/120
	I0315 23:25:08.188231   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 25/120
	I0315 23:25:09.189578   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 26/120
	I0315 23:25:10.191944   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 27/120
	I0315 23:25:11.193760   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 28/120
	I0315 23:25:12.195368   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 29/120
	I0315 23:25:13.197645   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 30/120
	I0315 23:25:14.199126   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 31/120
	I0315 23:25:15.201340   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 32/120
	I0315 23:25:16.202499   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 33/120
	I0315 23:25:17.203869   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 34/120
	I0315 23:25:18.205808   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 35/120
	I0315 23:25:19.207690   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 36/120
	I0315 23:25:20.209582   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 37/120
	I0315 23:25:21.210895   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 38/120
	I0315 23:25:22.212217   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 39/120
	I0315 23:25:23.213838   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 40/120
	I0315 23:25:24.215900   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 41/120
	I0315 23:25:25.217705   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 42/120
	I0315 23:25:26.219197   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 43/120
	I0315 23:25:27.221216   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 44/120
	I0315 23:25:28.223498   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 45/120
	I0315 23:25:29.225968   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 46/120
	I0315 23:25:30.227381   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 47/120
	I0315 23:25:31.228718   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 48/120
	I0315 23:25:32.230659   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 49/120
	I0315 23:25:33.232870   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 50/120
	I0315 23:25:34.234540   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 51/120
	I0315 23:25:35.236635   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 52/120
	I0315 23:25:36.238125   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 53/120
	I0315 23:25:37.239610   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 54/120
	I0315 23:25:38.241274   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 55/120
	I0315 23:25:39.242601   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 56/120
	I0315 23:25:40.244168   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 57/120
	I0315 23:25:41.245927   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 58/120
	I0315 23:25:42.247392   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 59/120
	I0315 23:25:43.249832   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 60/120
	I0315 23:25:44.251070   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 61/120
	I0315 23:25:45.252856   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 62/120
	I0315 23:25:46.255347   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 63/120
	I0315 23:25:47.256587   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 64/120
	I0315 23:25:48.258545   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 65/120
	I0315 23:25:49.260098   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 66/120
	I0315 23:25:50.261831   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 67/120
	I0315 23:25:51.263297   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 68/120
	I0315 23:25:52.264900   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 69/120
	I0315 23:25:53.266858   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 70/120
	I0315 23:25:54.268144   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 71/120
	I0315 23:25:55.269378   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 72/120
	I0315 23:25:56.270555   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 73/120
	I0315 23:25:57.272720   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 74/120
	I0315 23:25:58.274388   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 75/120
	I0315 23:25:59.275707   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 76/120
	I0315 23:26:00.276846   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 77/120
	I0315 23:26:01.278230   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 78/120
	I0315 23:26:02.279472   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 79/120
	I0315 23:26:03.281090   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 80/120
	I0315 23:26:04.283093   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 81/120
	I0315 23:26:05.284523   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 82/120
	I0315 23:26:06.286053   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 83/120
	I0315 23:26:07.287485   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 84/120
	I0315 23:26:08.288867   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 85/120
	I0315 23:26:09.289981   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 86/120
	I0315 23:26:10.291158   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 87/120
	I0315 23:26:11.292436   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 88/120
	I0315 23:26:12.293722   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 89/120
	I0315 23:26:13.296041   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 90/120
	I0315 23:26:14.297306   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 91/120
	I0315 23:26:15.298724   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 92/120
	I0315 23:26:16.300312   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 93/120
	I0315 23:26:17.301445   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 94/120
	I0315 23:26:18.302892   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 95/120
	I0315 23:26:19.304347   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 96/120
	I0315 23:26:20.305661   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 97/120
	I0315 23:26:21.306893   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 98/120
	I0315 23:26:22.308252   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 99/120
	I0315 23:26:23.310156   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 100/120
	I0315 23:26:24.311994   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 101/120
	I0315 23:26:25.313341   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 102/120
	I0315 23:26:26.315207   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 103/120
	I0315 23:26:27.316627   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 104/120
	I0315 23:26:28.318807   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 105/120
	I0315 23:26:29.320143   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 106/120
	I0315 23:26:30.321953   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 107/120
	I0315 23:26:31.323484   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 108/120
	I0315 23:26:32.324840   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 109/120
	I0315 23:26:33.326971   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 110/120
	I0315 23:26:34.328312   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 111/120
	I0315 23:26:35.329787   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 112/120
	I0315 23:26:36.331250   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 113/120
	I0315 23:26:37.332506   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 114/120
	I0315 23:26:38.334685   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 115/120
	I0315 23:26:39.336125   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 116/120
	I0315 23:26:40.337929   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 117/120
	I0315 23:26:41.340419   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 118/120
	I0315 23:26:42.342388   99645 main.go:141] libmachine: (ha-285481-m04) Waiting for machine to stop 119/120
	I0315 23:26:43.343008   99645 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0315 23:26:43.343091   99645 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0315 23:26:43.344624   99645 out.go:177] 
	W0315 23:26:43.345903   99645 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0315 23:26:43.345924   99645 out.go:239] * 
	* 
	W0315 23:26:43.348969   99645 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0315 23:26:43.350541   99645 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-285481 stop -v=7 --alsologtostderr": exit status 82
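
The stderr above shows the kvm2 driver asking the guest to stop and then polling its state once per second for 120 attempts ("Waiting for machine to stop 0/120" through "119/120") before giving up with GUEST_STOP_TIMEOUT because ha-285481-m04 still reports "Running". The sketch below captures the shape of that poll-until-stopped loop; vmState and stopVM are hypothetical stand-ins for the real driver calls, and this is an illustration of the pattern, not minikube's stop code.

// waitForStop mirrors the polling pattern visible in the log above: request a
// stop, then check the domain state once per second for up to `attempts` tries
// before giving up. vmState and stopVM are hypothetical stand-ins for the kvm2
// driver calls; sketch only, not minikube code.
package main

import (
	"fmt"
	"time"
)

type state string

const (
	running state = "Running"
	stopped state = "Stopped"
)

func waitForStop(name string, attempts int, vmState func(string) state, stopVM func(string) error) error {
	if err := stopVM(name); err != nil {
		return err
	}
	for i := 0; i < attempts; i++ {
		if vmState(name) == stopped {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(time.Second)
	}
	// Matches the failure mode above: still "Running" after the last attempt.
	return fmt.Errorf("unable to stop vm, current state %q", vmState(name))
}

func main() {
	// Simulate a guest that ignores the stop request, as ha-285481-m04 did above
	// (only 5 attempts here so the demo finishes quickly).
	alwaysRunning := func(string) state { return running }
	noopStop := func(string) error { return nil }
	if err := waitForStop("ha-285481-m04", 5, alwaysRunning, noopStop); err != nil {
		fmt.Println("stop err:", err)
	}
}
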
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr: exit status 3 (19.042249484s)

                                                
                                                
-- stdout --
	ha-285481
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-285481-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:26:43.412171   99974 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:26:43.412294   99974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:26:43.412302   99974 out.go:304] Setting ErrFile to fd 2...
	I0315 23:26:43.412306   99974 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:26:43.412954   99974 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:26:43.413318   99974 out.go:298] Setting JSON to false
	I0315 23:26:43.413398   99974 mustload.go:65] Loading cluster: ha-285481
	I0315 23:26:43.413619   99974 notify.go:220] Checking for updates...
	I0315 23:26:43.414385   99974 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:26:43.414441   99974 status.go:255] checking status of ha-285481 ...
	I0315 23:26:43.414958   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.415027   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.437784   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36435
	I0315 23:26:43.438334   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.438998   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.439035   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.439506   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.439747   99974 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:26:43.441646   99974 status.go:330] ha-285481 host status = "Running" (err=<nil>)
	I0315 23:26:43.441671   99974 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:26:43.442113   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.442166   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.456754   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44759
	I0315 23:26:43.457182   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.457689   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.457717   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.458068   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.458277   99974 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:26:43.461135   99974 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:26:43.461617   99974 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:26:43.461642   99974 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:26:43.461774   99974 host.go:66] Checking if "ha-285481" exists ...
	I0315 23:26:43.462078   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.462120   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.476948   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39635
	I0315 23:26:43.477328   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.477787   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.477813   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.478149   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.478337   99974 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:26:43.478575   99974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:26:43.478601   99974 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:26:43.481474   99974 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:26:43.481867   99974 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:26:43.481893   99974 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:26:43.482042   99974 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:26:43.482231   99974 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:26:43.482387   99974 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:26:43.482528   99974 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:26:43.566728   99974 ssh_runner.go:195] Run: systemctl --version
	I0315 23:26:43.574829   99974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:26:43.595947   99974 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:26:43.595982   99974 api_server.go:166] Checking apiserver status ...
	I0315 23:26:43.596038   99974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:26:43.612686   99974 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5605/cgroup
	W0315 23:26:43.623037   99974 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5605/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:26:43.623091   99974 ssh_runner.go:195] Run: ls
	I0315 23:26:43.628288   99974 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:26:43.632920   99974 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:26:43.632943   99974 status.go:422] ha-285481 apiserver status = Running (err=<nil>)
	I0315 23:26:43.632953   99974 status.go:257] ha-285481 status: &{Name:ha-285481 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:26:43.632971   99974 status.go:255] checking status of ha-285481-m02 ...
	I0315 23:26:43.633292   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.633338   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.649269   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38213
	I0315 23:26:43.649751   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.650254   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.650276   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.650638   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.650825   99974 main.go:141] libmachine: (ha-285481-m02) Calling .GetState
	I0315 23:26:43.652660   99974 status.go:330] ha-285481-m02 host status = "Running" (err=<nil>)
	I0315 23:26:43.652682   99974 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:26:43.652972   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.653009   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.667688   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33029
	I0315 23:26:43.668155   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.668666   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.668690   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.669003   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.669188   99974 main.go:141] libmachine: (ha-285481-m02) Calling .GetIP
	I0315 23:26:43.672112   99974 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:26:43.672547   99974 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:22:02 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:26:43.672569   99974 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:26:43.672748   99974 host.go:66] Checking if "ha-285481-m02" exists ...
	I0315 23:26:43.673059   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.673098   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.688568   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35319
	I0315 23:26:43.688990   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.689439   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.689463   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.689801   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.690021   99974 main.go:141] libmachine: (ha-285481-m02) Calling .DriverName
	I0315 23:26:43.690227   99974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:26:43.690253   99974 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHHostname
	I0315 23:26:43.692991   99974 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:26:43.693393   99974 main.go:141] libmachine: (ha-285481-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3a:fc:bf", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:22:02 +0000 UTC Type:0 Mac:52:54:00:3a:fc:bf Iaid: IPaddr:192.168.39.201 Prefix:24 Hostname:ha-285481-m02 Clientid:01:52:54:00:3a:fc:bf}
	I0315 23:26:43.693413   99974 main.go:141] libmachine: (ha-285481-m02) DBG | domain ha-285481-m02 has defined IP address 192.168.39.201 and MAC address 52:54:00:3a:fc:bf in network mk-ha-285481
	I0315 23:26:43.693572   99974 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHPort
	I0315 23:26:43.693753   99974 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHKeyPath
	I0315 23:26:43.693934   99974 main.go:141] libmachine: (ha-285481-m02) Calling .GetSSHUsername
	I0315 23:26:43.694129   99974 sshutil.go:53] new ssh client: &{IP:192.168.39.201 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m02/id_rsa Username:docker}
	I0315 23:26:43.781640   99974 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:26:43.801858   99974 kubeconfig.go:125] found "ha-285481" server: "https://192.168.39.254:8443"
	I0315 23:26:43.801885   99974 api_server.go:166] Checking apiserver status ...
	I0315 23:26:43.801916   99974 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:26:43.818113   99974 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup
	W0315 23:26:43.828452   99974 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1471/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:26:43.828533   99974 ssh_runner.go:195] Run: ls
	I0315 23:26:43.833577   99974 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0315 23:26:43.839878   99974 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0315 23:26:43.839901   99974 status.go:422] ha-285481-m02 apiserver status = Running (err=<nil>)
	I0315 23:26:43.839910   99974 status.go:257] ha-285481-m02 status: &{Name:ha-285481-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:26:43.839933   99974 status.go:255] checking status of ha-285481-m04 ...
	I0315 23:26:43.840266   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.840307   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.856500   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36645
	I0315 23:26:43.856941   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.857508   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.857534   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.857890   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.858152   99974 main.go:141] libmachine: (ha-285481-m04) Calling .GetState
	I0315 23:26:43.859884   99974 status.go:330] ha-285481-m04 host status = "Running" (err=<nil>)
	I0315 23:26:43.859901   99974 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:26:43.860186   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.860225   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.878422   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34149
	I0315 23:26:43.878869   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.879503   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.879528   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.879830   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.880055   99974 main.go:141] libmachine: (ha-285481-m04) Calling .GetIP
	I0315 23:26:43.882967   99974 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:26:43.883455   99974 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:24:10 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:26:43.883489   99974 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:26:43.883653   99974 host.go:66] Checking if "ha-285481-m04" exists ...
	I0315 23:26:43.884046   99974 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:26:43.884085   99974 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:26:43.898786   99974 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40799
	I0315 23:26:43.899288   99974 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:26:43.899827   99974 main.go:141] libmachine: Using API Version  1
	I0315 23:26:43.899852   99974 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:26:43.900177   99974 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:26:43.900376   99974 main.go:141] libmachine: (ha-285481-m04) Calling .DriverName
	I0315 23:26:43.900593   99974 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:26:43.900618   99974 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHHostname
	I0315 23:26:43.903083   99974 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:26:43.903466   99974 main.go:141] libmachine: (ha-285481-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ec:a9:ec", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:24:10 +0000 UTC Type:0 Mac:52:54:00:ec:a9:ec Iaid: IPaddr:192.168.39.115 Prefix:24 Hostname:ha-285481-m04 Clientid:01:52:54:00:ec:a9:ec}
	I0315 23:26:43.903499   99974 main.go:141] libmachine: (ha-285481-m04) DBG | domain ha-285481-m04 has defined IP address 192.168.39.115 and MAC address 52:54:00:ec:a9:ec in network mk-ha-285481
	I0315 23:26:43.903645   99974 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHPort
	I0315 23:26:43.903812   99974 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHKeyPath
	I0315 23:26:43.903967   99974 main.go:141] libmachine: (ha-285481-m04) Calling .GetSSHUsername
	I0315 23:26:43.904147   99974 sshutil.go:53] new ssh client: &{IP:192.168.39.115 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481-m04/id_rsa Username:docker}
	W0315 23:27:02.395520   99974 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.115:22: connect: no route to host
	W0315 23:27:02.395641   99974 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host
	E0315 23:27:02.395672   99974 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host
	I0315 23:27:02.395682   99974 status.go:257] ha-285481-m04 status: &{Name:ha-285481-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0315 23:27:02.395714   99974 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.115:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr" : exit status 3
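
In the status run above, each control-plane node is judged healthy by fetching https://192.168.39.254:8443/healthz and expecting a 200 "ok", while the worker is only reached over SSH; the `dial tcp 192.168.39.115:22: connect: no route to host` failure is what turns ha-285481-m04 into Host:Error / Kubelet:Nonexistent and yields exit status 3. A minimal sketch of such a healthz probe follows, assuming TLS verification is skipped for brevity (minikube itself trusts the cluster CA instead); it is an illustration, not minikube's status.go.

// probeHealthz sketches the check visible in the log above: GET
// https://<apiserver>/healthz and treat a 200 response as healthy.
// Assumption: certificate verification is skipped here for brevity; the real
// client trusts the cluster CA. Illustration only, not minikube's status.go.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption: no CA handling
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err // e.g. "no route to host" when the node is unreachable
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	return nil
}

func main() {
	if err := probeHealthz("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("apiserver status err:", err)
	}
}
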
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-285481 -n ha-285481
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-285481 logs -n 25: (1.783493298s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m04 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp testdata/cp-test.txt                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481:/home/docker/cp-test_ha-285481-m04_ha-285481.txt                       |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481 sudo cat                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481.txt                                 |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m02:/home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m02 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m03:/home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n                                                                 | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | ha-285481-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-285481 ssh -n ha-285481-m03 sudo cat                                          | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC | 15 Mar 24 23:14 UTC |
	|         | /home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-285481 node stop m02 -v=7                                                     | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:14 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-285481 node start m02 -v=7                                                    | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:17 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-285481 -v=7                                                           | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-285481 -v=7                                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:18 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-285481 --wait=true -v=7                                                    | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:20 UTC | 15 Mar 24 23:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-285481                                                                | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:24 UTC |                     |
	| node    | ha-285481 node delete m03 -v=7                                                   | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:24 UTC | 15 Mar 24 23:24 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-285481 stop -v=7                                                              | ha-285481 | jenkins | v1.32.0 | 15 Mar 24 23:24 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 23:20:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 23:20:05.473334   98161 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:20:05.473520   98161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:20:05.473538   98161 out.go:304] Setting ErrFile to fd 2...
	I0315 23:20:05.473545   98161 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:20:05.473903   98161 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:20:05.474719   98161 out.go:298] Setting JSON to false
	I0315 23:20:05.476045   98161 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":7355,"bootTime":1710537450,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:20:05.476133   98161 start.go:139] virtualization: kvm guest
	I0315 23:20:05.478845   98161 out.go:177] * [ha-285481] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:20:05.480654   98161 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:20:05.480658   98161 notify.go:220] Checking for updates...
	I0315 23:20:05.481932   98161 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:20:05.483432   98161 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:20:05.484943   98161 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:20:05.486307   98161 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:20:05.487615   98161 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:20:05.489392   98161 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:20:05.489480   98161 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:20:05.489929   98161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:20:05.489970   98161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:20:05.505831   98161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38943
	I0315 23:20:05.506241   98161 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:20:05.506825   98161 main.go:141] libmachine: Using API Version  1
	I0315 23:20:05.506849   98161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:20:05.507209   98161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:20:05.507427   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:20:05.542547   98161 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 23:20:05.544154   98161 start.go:297] selected driver: kvm2
	I0315 23:20:05.544181   98161 start.go:901] validating driver "kvm2" against &{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:20:05.544327   98161 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:20:05.544652   98161 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:20:05.544720   98161 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:20:05.560544   98161 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:20:05.561300   98161 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:20:05.561413   98161 cni.go:84] Creating CNI manager for ""
	I0315 23:20:05.561429   98161 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 23:20:05.561492   98161 start.go:340] cluster config:
	{Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:20:05.561639   98161 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:20:05.563713   98161 out.go:177] * Starting "ha-285481" primary control-plane node in "ha-285481" cluster
	I0315 23:20:05.565167   98161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:20:05.565210   98161 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 23:20:05.565226   98161 cache.go:56] Caching tarball of preloaded images
	I0315 23:20:05.565320   98161 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:20:05.565333   98161 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:20:05.565468   98161 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/config.json ...
	I0315 23:20:05.565691   98161 start.go:360] acquireMachinesLock for ha-285481: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:20:05.565763   98161 start.go:364] duration metric: took 51.025µs to acquireMachinesLock for "ha-285481"
	I0315 23:20:05.565783   98161 start.go:96] Skipping create...Using existing machine configuration
	I0315 23:20:05.565792   98161 fix.go:54] fixHost starting: 
	I0315 23:20:05.566054   98161 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:20:05.566097   98161 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:20:05.580611   98161 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43691
	I0315 23:20:05.581063   98161 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:20:05.581567   98161 main.go:141] libmachine: Using API Version  1
	I0315 23:20:05.581587   98161 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:20:05.581918   98161 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:20:05.582088   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:20:05.582299   98161 main.go:141] libmachine: (ha-285481) Calling .GetState
	I0315 23:20:05.584067   98161 fix.go:112] recreateIfNeeded on ha-285481: state=Running err=<nil>
	W0315 23:20:05.584083   98161 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 23:20:05.586202   98161 out.go:177] * Updating the running kvm2 "ha-285481" VM ...
	I0315 23:20:05.587724   98161 machine.go:94] provisionDockerMachine start ...
	I0315 23:20:05.587741   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:20:05.587924   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.590475   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.590936   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.590964   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.591142   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:05.591342   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.591517   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.591659   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:05.591840   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:05.592010   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:05.592021   98161 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 23:20:05.700908   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481
	
	I0315 23:20:05.700936   98161 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:20:05.701218   98161 buildroot.go:166] provisioning hostname "ha-285481"
	I0315 23:20:05.701253   98161 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:20:05.701460   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.704335   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.704750   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.704776   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.704974   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:05.705204   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.705387   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.705539   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:05.705747   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:05.706019   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:05.706041   98161 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-285481 && echo "ha-285481" | sudo tee /etc/hostname
	I0315 23:20:05.827360   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-285481
	
	I0315 23:20:05.827411   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.830463   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.830920   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.830950   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.831156   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:05.831384   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.831569   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:05.831725   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:05.831929   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:05.832158   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:05.832175   98161 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-285481' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-285481/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-285481' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:20:05.932204   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:20:05.932234   98161 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:20:05.932264   98161 buildroot.go:174] setting up certificates
	I0315 23:20:05.932273   98161 provision.go:84] configureAuth start
	I0315 23:20:05.932281   98161 main.go:141] libmachine: (ha-285481) Calling .GetMachineName
	I0315 23:20:05.932657   98161 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:20:05.935436   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.935846   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.935873   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.936016   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:05.938456   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.938768   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:05.938806   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:05.938908   98161 provision.go:143] copyHostCerts
	I0315 23:20:05.938948   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:20:05.939012   98161 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:20:05.939023   98161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:20:05.939105   98161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:20:05.939233   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:20:05.939260   98161 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:20:05.939269   98161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:20:05.939308   98161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:20:05.939401   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:20:05.939424   98161 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:20:05.939434   98161 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:20:05.939468   98161 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:20:05.939544   98161 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.ha-285481 san=[127.0.0.1 192.168.39.23 ha-285481 localhost minikube]
	I0315 23:20:06.044007   98161 provision.go:177] copyRemoteCerts
	I0315 23:20:06.044102   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:20:06.044132   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:06.047154   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.047563   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:06.047587   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.047771   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:06.047971   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:06.048209   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:06.048386   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:20:06.134348   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:20:06.134433   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0315 23:20:06.164481   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:20:06.164572   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0315 23:20:06.194126   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:20:06.194232   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:20:06.223304   98161 provision.go:87] duration metric: took 291.013841ms to configureAuth
	I0315 23:20:06.223345   98161 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:20:06.223649   98161 config.go:182] Loaded profile config "ha-285481": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:20:06.223739   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:20:06.226888   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.227367   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:20:06.227397   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:20:06.227677   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:20:06.227847   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:06.228033   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:20:06.228181   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:20:06.228358   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:20:06.228551   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:20:06.228572   98161 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:21:37.081788   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:21:37.081884   98161 machine.go:97] duration metric: took 1m31.494128507s to provisionDockerMachine
	I0315 23:21:37.081902   98161 start.go:293] postStartSetup for "ha-285481" (driver="kvm2")
	I0315 23:21:37.081974   98161 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:21:37.082005   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.082374   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:21:37.082404   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.085889   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.086457   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.086488   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.086665   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.086887   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.087026   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.087174   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:21:37.172388   98161 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:21:37.177380   98161 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:21:37.177409   98161 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:21:37.177487   98161 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:21:37.177561   98161 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:21:37.177592   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:21:37.177673   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:21:37.188641   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:21:37.218040   98161 start.go:296] duration metric: took 136.119484ms for postStartSetup
	I0315 23:21:37.218110   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.218448   98161 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0315 23:21:37.218477   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.221520   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.221973   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.222003   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.222069   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.222266   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.222443   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.222680   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	W0315 23:21:37.302825   98161 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0315 23:21:37.302851   98161 fix.go:56] duration metric: took 1m31.737061421s for fixHost
	I0315 23:21:37.302875   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.305457   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.305843   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.305871   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.306039   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.306269   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.306438   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.306567   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.306755   98161 main.go:141] libmachine: Using SSH client type: native
	I0315 23:21:37.306933   98161 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.23 22 <nil> <nil>}
	I0315 23:21:37.306944   98161 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:21:37.408903   98161 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710544897.375688930
	
	I0315 23:21:37.408929   98161 fix.go:216] guest clock: 1710544897.375688930
	I0315 23:21:37.408936   98161 fix.go:229] Guest: 2024-03-15 23:21:37.37568893 +0000 UTC Remote: 2024-03-15 23:21:37.302859814 +0000 UTC m=+91.892073006 (delta=72.829116ms)
	I0315 23:21:37.408965   98161 fix.go:200] guest clock delta is within tolerance: 72.829116ms
	I0315 23:21:37.408971   98161 start.go:83] releasing machines lock for "ha-285481", held for 1m31.843195787s
	I0315 23:21:37.408989   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.409274   98161 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:21:37.412007   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.412387   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.412436   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.412551   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.413180   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.413403   98161 main.go:141] libmachine: (ha-285481) Calling .DriverName
	I0315 23:21:37.413528   98161 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:21:37.413595   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.413620   98161 ssh_runner.go:195] Run: cat /version.json
	I0315 23:21:37.413641   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHHostname
	I0315 23:21:37.416298   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.416564   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.416627   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.416653   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.416782   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.417015   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.417090   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:37.417119   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:37.417172   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.417233   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHPort
	I0315 23:21:37.417661   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:21:37.417693   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHKeyPath
	I0315 23:21:37.417885   98161 main.go:141] libmachine: (ha-285481) Calling .GetSSHUsername
	I0315 23:21:37.418072   98161 sshutil.go:53] new ssh client: &{IP:192.168.39.23 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/ha-285481/id_rsa Username:docker}
	I0315 23:21:37.493035   98161 ssh_runner.go:195] Run: systemctl --version
	I0315 23:21:37.521636   98161 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:21:37.688419   98161 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0315 23:21:37.701191   98161 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:21:37.701269   98161 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:21:37.712058   98161 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 23:21:37.712084   98161 start.go:494] detecting cgroup driver to use...
	I0315 23:21:37.712144   98161 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:21:37.730725   98161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:21:37.745988   98161 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:21:37.746046   98161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:21:37.761556   98161 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:21:37.776923   98161 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:21:37.950845   98161 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:21:38.129846   98161 docker.go:233] disabling docker service ...
	I0315 23:21:38.129914   98161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:21:38.157939   98161 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:21:38.209719   98161 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:21:38.484160   98161 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:21:38.723303   98161 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:21:38.755250   98161 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:21:38.789927   98161 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:21:38.790042   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.810926   98161 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:21:38.811014   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.823490   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.839713   98161 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:21:38.851209   98161 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:21:38.863308   98161 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:21:38.873879   98161 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:21:38.884928   98161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:21:39.038018   98161 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:21:49.211714   98161 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.173636255s)
	I0315 23:21:49.211746   98161 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:21:49.211812   98161 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:21:49.217287   98161 start.go:562] Will wait 60s for crictl version
	I0315 23:21:49.217346   98161 ssh_runner.go:195] Run: which crictl
	I0315 23:21:49.221613   98161 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:21:49.261929   98161 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:21:49.262023   98161 ssh_runner.go:195] Run: crio --version
	I0315 23:21:49.293216   98161 ssh_runner.go:195] Run: crio --version
	I0315 23:21:49.325628   98161 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:21:49.327131   98161 main.go:141] libmachine: (ha-285481) Calling .GetIP
	I0315 23:21:49.329904   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:49.330323   98161 main.go:141] libmachine: (ha-285481) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b7:7a:0e", ip: ""} in network mk-ha-285481: {Iface:virbr1 ExpiryTime:2024-03-16 00:10:10 +0000 UTC Type:0 Mac:52:54:00:b7:7a:0e Iaid: IPaddr:192.168.39.23 Prefix:24 Hostname:ha-285481 Clientid:01:52:54:00:b7:7a:0e}
	I0315 23:21:49.330351   98161 main.go:141] libmachine: (ha-285481) DBG | domain ha-285481 has defined IP address 192.168.39.23 and MAC address 52:54:00:b7:7a:0e in network mk-ha-285481
	I0315 23:21:49.330546   98161 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:21:49.335523   98161 kubeadm.go:877] updating cluster {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 23:21:49.335667   98161 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:21:49.335713   98161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:21:49.388250   98161 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:21:49.388278   98161 crio.go:415] Images already preloaded, skipping extraction
	I0315 23:21:49.388340   98161 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:21:49.424625   98161 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:21:49.424660   98161 cache_images.go:84] Images are preloaded, skipping loading
	I0315 23:21:49.424695   98161 kubeadm.go:928] updating node { 192.168.39.23 8443 v1.28.4 crio true true} ...
	I0315 23:21:49.424826   98161 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-285481 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.23
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:21:49.424910   98161 ssh_runner.go:195] Run: crio config
	I0315 23:21:49.473284   98161 cni.go:84] Creating CNI manager for ""
	I0315 23:21:49.473312   98161 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0315 23:21:49.473326   98161 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 23:21:49.473354   98161 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.23 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-285481 NodeName:ha-285481 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.23"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.23 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 23:21:49.473515   98161 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.23
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-285481"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.23
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.23"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
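	The rendered kubeadm/kubelet/kube-proxy config above sets conntrack.maxPerCore and both TCP timeouts to zero, which (per the embedded comments) makes kube-proxy skip changing the corresponding host nf_conntrack sysctls rather than override them. A minimal sketch for inspecting what the node keeps using, assuming shell access to the ha-285481 profile VM (these commands are illustrative, not part of the test run):

	    minikube -p ha-285481 ssh -- sudo sysctl net.netfilter.nf_conntrack_max
	    minikube -p ha-285481 ssh -- sudo sysctl net.netfilter.nf_conntrack_tcp_timeout_established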
	
	I0315 23:21:49.473542   98161 kube-vip.go:111] generating kube-vip config ...
	I0315 23:21:49.473592   98161 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0315 23:21:49.486548   98161 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0315 23:21:49.486676   98161 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
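	The manifest above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests (the kube-vip.yaml transfer appears a few lines below): it advertises the control-plane VIP 192.168.39.254 on eth0, takes a leader-election lease (plndr-cp-lock) in kube-system, and load-balances the API server on port 8443. A quick sanity check from the host, assuming the ha-285481 profile, might look like:

	    minikube -p ha-285481 ssh -- sudo crictl ps --name kube-vip
	    minikube -p ha-285481 ssh -- ip -4 addr show eth0

	The second command should list 192.168.39.254 as an extra address on eth0 while this node holds the lease.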
	I0315 23:21:49.486736   98161 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:21:49.497506   98161 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 23:21:49.497596   98161 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0315 23:21:49.508513   98161 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0315 23:21:49.526819   98161 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:21:49.544723   98161 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0315 23:21:49.562193   98161 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0315 23:21:49.579487   98161 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0315 23:21:49.584268   98161 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:21:49.739015   98161 ssh_runner.go:195] Run: sudo systemctl start kubelet
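	With the 10-kubeadm.conf drop-in and kubelet.service unit copied into place, the kubelet is brought up via daemon-reload followed by start, as logged above. A minimal sketch for confirming it actually became active, again assuming the ha-285481 profile:

	    minikube -p ha-285481 ssh -- sudo systemctl is-active kubelet
	    minikube -p ha-285481 ssh -- sudo journalctl -u kubelet --no-pager -n 20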
	I0315 23:21:49.754696   98161 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481 for IP: 192.168.39.23
	I0315 23:21:49.754720   98161 certs.go:194] generating shared ca certs ...
	I0315 23:21:49.754764   98161 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:21:49.754971   98161 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:21:49.755019   98161 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:21:49.755031   98161 certs.go:256] generating profile certs ...
	I0315 23:21:49.755136   98161 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/client.key
	I0315 23:21:49.755168   98161 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a
	I0315 23:21:49.755194   98161 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.23 192.168.39.201 192.168.39.248 192.168.39.254]
	I0315 23:21:49.833761   98161 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a ...
	I0315 23:21:49.833801   98161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a: {Name:mka8dc5e0a5c882cbf8137eb9dd03f3b19698962 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:21:49.834026   98161 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a ...
	I0315 23:21:49.834046   98161 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a: {Name:mk9a4f673a35e8084eec2bb643e2d38493c92b6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:21:49.834153   98161 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt.5e2c949a -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt
	I0315 23:21:49.834314   98161 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key.5e2c949a -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key
	I0315 23:21:49.834483   98161 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key
	I0315 23:21:49.834503   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:21:49.834518   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:21:49.834542   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:21:49.834564   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:21:49.834580   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:21:49.834595   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:21:49.834612   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:21:49.834626   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:21:49.834684   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:21:49.834725   98161 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:21:49.834738   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:21:49.834763   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:21:49.834792   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:21:49.834812   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:21:49.834854   98161 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:21:49.834887   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:21:49.834910   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:49.834924   98161 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:21:49.835648   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:21:49.864340   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:21:49.892431   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:21:49.918794   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:21:49.945988   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0315 23:21:49.971706   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:21:49.998672   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:21:50.025893   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/ha-285481/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:21:50.051995   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:21:50.079274   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:21:50.107872   98161 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:21:50.136261   98161 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 23:21:50.154686   98161 ssh_runner.go:195] Run: openssl version
	I0315 23:21:50.161740   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:21:50.174898   98161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:21:50.180368   98161 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:21:50.180465   98161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:21:50.186848   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0315 23:21:50.197192   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:21:50.209246   98161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:21:50.214716   98161 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:21:50.214786   98161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:21:50.227111   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:21:50.242928   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:21:50.255039   98161 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:50.259947   98161 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:50.260030   98161 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:21:50.266329   98161 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
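	The pattern above repeats for each CA copied into /usr/share/ca-certificates: the file is linked under /etc/ssl/certs by name, its subject hash is computed with openssl x509 -hash, and a <hash>.0 symlink (51391683.0, 3ec20f2e.0 and b5213941.0 here) is created so OpenSSL's hashed directory lookup can find it. The same step for a single certificate, sketched by hand:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"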
	I0315 23:21:50.276426   98161 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:21:50.281225   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 23:21:50.287203   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 23:21:50.293325   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 23:21:50.299543   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 23:21:50.305761   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 23:21:50.311986   98161 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
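	The six openssl x509 -checkend 86400 runs above are expiry probes: -checkend N exits 0 when the certificate is still valid N seconds from now and non-zero when it will have expired, so 86400 asks whether each control-plane cert survives the next 24 hours before the cluster is restarted. Spelled out for one of them, run inside the node (a sketch, not part of the test):

	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	        echo "etcd server cert valid for at least 24h"
	    else
	        echo "etcd server cert expires within 24h"
	    fi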
	I0315 23:21:50.318207   98161 kubeadm.go:391] StartCluster: {Name:ha-285481 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-285481 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.23 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.201 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.248 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.115 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:21:50.318360   98161 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 23:21:50.318494   98161 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 23:21:50.359937   98161 cri.go:89] found id: "c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00"
	I0315 23:21:50.359966   98161 cri.go:89] found id: "f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d"
	I0315 23:21:50.359973   98161 cri.go:89] found id: "9d334bd492573c9230531332ad4ebbf1608dda09f4fdb76d29eedbf1830fb84c"
	I0315 23:21:50.359978   98161 cri.go:89] found id: "61e89e5375f385fd70fe2392aab4c6a216de1586dc519260fdc0a649ab724d90"
	I0315 23:21:50.359982   98161 cri.go:89] found id: "a6d28b03cb917d55f42fba769ea3d6b48a0e57137c6870c4d878c4edd3e812a6"
	I0315 23:21:50.359987   98161 cri.go:89] found id: "b450101b891fe9ce8fa24f56acdf7c4b48513c4f86b9760e54807cf4d2f8a42d"
	I0315 23:21:50.359990   98161 cri.go:89] found id: "4b9eb5f63654c989746462193024e76ae0e8c941899ac5f4c5f7ed7a25755404"
	I0315 23:21:50.359993   98161 cri.go:89] found id: "706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb"
	I0315 23:21:50.359995   98161 cri.go:89] found id: "213c94783e488f2ebd013a27587d7fb2436e9f4c8e18f80aa658aad883b49ad1"
	I0315 23:21:50.360007   98161 cri.go:89] found id: "3f54e9bdd61455d194da594c57a801f583209d258799c2cce4ef95596e145b53"
	I0315 23:21:50.360010   98161 cri.go:89] found id: "46eabb63fd66fc5a4d82d7292a73b68ff95075507fae0afe3927148d6787a316"
	I0315 23:21:50.360013   98161 cri.go:89] found id: "e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2"
	I0315 23:21:50.360022   98161 cri.go:89] found id: "bc2a1703be0eff4be1f81992cd489298e2516aabbf2a31195ab13c27c2a997db"
	I0315 23:21:50.360024   98161 cri.go:89] found id: "b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938"
	I0315 23:21:50.360028   98161 cri.go:89] found id: "122f4a81c61ffa1a6323a763bcffe975a026ecc276344287213a8f6f75e8b079"
	I0315 23:21:50.360031   98161 cri.go:89] found id: "a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca"
	I0315 23:21:50.360033   98161 cri.go:89] found id: ""
	I0315 23:21:50.360083   98161 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.022254121Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545223022224061,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66ea6c8a-ee45-4436-9431-31d3b12458fb name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.023165880Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f85dade6-3688-40c3-a589-dfc1238d3409 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.023247579Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f85dade6-3688-40c3-a589-dfc1238d3409 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.023770798Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c6cfb528fbb597235698fb87057894e776dde41bd3a8a0eb056235923bdae49,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710545093420682891,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.h
ash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kub
ernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b
7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54
562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f
79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a28
99304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXIT
ED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544232506307460
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f85dade6-3688-40c3-a589-dfc1238d3409 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.068669368Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=586f1cd6-63c7-425c-87d5-913e287ce4cf name=/runtime.v1.RuntimeService/Version
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.068748873Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=586f1cd6-63c7-425c-87d5-913e287ce4cf name=/runtime.v1.RuntimeService/Version
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.069788509Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09db6abc-3264-435e-93c4-cf1cf1ba37e2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.070250162Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545223070227756,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09db6abc-3264-435e-93c4-cf1cf1ba37e2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.070759350Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8073efee-85d4-4752-8783-ae51791ea40b name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.070851831Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8073efee-85d4-4752-8783-ae51791ea40b name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.071261188Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c6cfb528fbb597235698fb87057894e776dde41bd3a8a0eb056235923bdae49,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710545093420682891,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.h
ash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kub
ernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b
7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54
562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f
79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a28
99304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXIT
ED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544232506307460
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8073efee-85d4-4752-8783-ae51791ea40b name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.181810441Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3479b6e1-879c-45ad-9001-b1d7d287bc0f name=/runtime.v1.RuntimeService/Version
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.181884079Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3479b6e1-879c-45ad-9001-b1d7d287bc0f name=/runtime.v1.RuntimeService/Version
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.183523593Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f1e48c89-c8cb-49f5-85c4-2a0e86763297 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.184177692Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710545223184151122,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f1e48c89-c8cb-49f5-85c4-2a0e86763297 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.184822298Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b0194395-7b19-497c-b286-5f16a35b124a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.184897630Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b0194395-7b19-497c-b286-5f16a35b124a name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:27:03 ha-285481 crio[4345]: time="2024-03-15 23:27:03.185372478Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8c6cfb528fbb597235698fb87057894e776dde41bd3a8a0eb056235923bdae49,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:6,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710545093420682891,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.container.hash: 9b8588f7,io.kubernetes.container.restartCount: 6,io.kubernetes.container.t
erminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:4,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710544993416552680,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710544955421412724,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710544952413231043,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{io.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cd54d913bb9717d3cd51cb279e5fce526a573cdc03efea06be0ea4ff075d781,PodSandboxId:7045b908ab1b8859b75d8ec6f5e6e7ef1f441c29120db09de5a904acb5ae599c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710544945719069128,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessag
ePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518,PodSandboxId:d07c930b8ebca9bf552306163a1e1f47c3c5fc4d1ebc68c76be4137fb965a49e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544929412847801,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\
"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2,PodSandboxId:db6cf9923e6bd3ed056401770bbce33549941a3fbc08be170745c1c402fa194e,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710544927427897681,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.h
ash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3,PodSandboxId:fcd137b8d3b9fb2f3f6684a40285e090e326579e151a5e12f582ba6a5e1a7695,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710544912807457594,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kub
ernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991,PodSandboxId:8317b128a7c91602b405b0eec597fceb8ee3dc3d8b69bfa1af6417a1c9479ad7,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710544912644357744,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernete
s.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d,PodSandboxId:a8569823c9966852438c7cd6aeac430dfc9b064ad70921638e718ad97ffc5025,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710544912789261201,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9fd6f,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: bfce84cd-8517-4081-bd7d-a32f21e4b5ad,},Annotations:map[string]string{io.kubernetes.container.hash: d270bdbf,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:809dd4d909af8d4dacb581ba8802c8b175e770c0f1868b95a2a0ec42ccb55646,PodSandboxId:5a920e208332e160f3a1e2a4ca1ea444b87201b53abe9fb874b27b3be9a85a6a,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:3,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_RUNNING,CreatedAt:1710544912542242648,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b
7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7,PodSandboxId:aa73d4571ec499d41a3874b6e0450345238dc689c27f982bab504e3fc59e988d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710544912537947615,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotatio
ns:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b,PodSandboxId:ca198fa2b4b26402b766581c876c4697fb0a074e000911a112a72e10184135ca,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710544912557782681,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e92a25be95f613e9d58033ce1d8356e2,},Annotations:map[string]string{i
o.kubernetes.container.hash: 44cb10e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a6e055fb6555e26576b1c6cc1276c67a2c03e9cf2a98be00990ebc7a7b621c6,PodSandboxId:87554e8de736bc553f45cb053abd41ef348345c7ae646eec40f41bfb04b4e6da,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:5,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710544912228545847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 53d0c1b0-3c5c-443e-a653-9b91407c8792,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 9b8588f7,io.kubernetes.container.restartCount: 5,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a,PodSandboxId:b6941d02e47f9e500407172746a9982835d57c79de15584c1b6a9b4dfa54f499,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710544912410490727,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cc53b311fa710b50aec0d03c48913785,},Annotations:map[string]string{io.kuberne
tes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00,PodSandboxId:6689feff0da96023a484809b5adab74f0a7cd2efb82511276d88b13ead69ba39,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898619218805,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qxtp4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f713da8e-df53-4299-9b3c-8390bc69a077,},Annotations:map[string]string{io.kubernetes.container.hash: aae13aba,io.
kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d,PodSandboxId:68895f60fb2d2c20eb372845aa9aac995c924a04651f61f44dcfd8d219f499df,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710544898562977761,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredn
s-5dd5756b68-9c44k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 52fce3d0-cba2-4f0e-9a6c-8d4c082ff93e,},Annotations:map[string]string{io.kubernetes.container.hash: 9206c5b4,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:706f8e951e3f04c8870428e985a804b90befd8248f786ba5030630c97cbbdffb,PodSandboxId:0a7887e08f4552b6e6bcf963a70bed86d6baa449e939a97fc9beb09f1bd67e64,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:2,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54
562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710544709418924989,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 68cd56291cafdc1f4200c6b7b80e7314,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7e21f8e6f17875c270dbc3888ca3e491de226924d5a9a5a869c206b21b780f41,PodSandboxId:8857e9f8aa4477e6f8c7d56ecc9bcb70fcd9165937625a95fe7d1aeb9fffe1fc,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f
79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710544418860005807,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-klvd7,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fce71bb2-0072-40ff-88b2-fa91d9ca758f,},Annotations:map[string]string{io.kubernetes.container.hash: 846b902a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2,PodSandboxId:5404a98a681eaf83cc09e6f73cc50d328d67e53ead1bef9c7d05a5a932f39580,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a28
99304398e,State:CONTAINER_EXITED,CreatedAt:1710544251853165355,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-cml9m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1b0719f-96b2-4671-b09c-583b2c04595e,},Annotations:map[string]string{io.kubernetes.container.hash: fbb29d8a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938,PodSandboxId:9a7f75d9143823c48b06083a775e37a3a69f4598989d81f02118ebeb3771f3fc,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXIT
ED,CreatedAt:1710544232586409390,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2c2e1f7684a84d107f4d7ec466a95cbe,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca,PodSandboxId:8e777ceb1c377b70b3d17790ecaaa00510b276a577b921d6ab50ca60101dc3f8,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710544232506307460
,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-285481,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3c169e1568285c93d8c5129bb35547b,},Annotations:map[string]string{io.kubernetes.container.hash: e8dfce0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b0194395-7b19-497c-b286-5f16a35b124a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8c6cfb528fbb5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       6                   87554e8de736b       storage-provisioner
	7aa0725072636       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               4                   a8569823c9966       kindnet-9fd6f
	bddf2a55a13f4       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   2                   b6941d02e47f9       kube-controller-manager-ha-285481
	2dae1e7bd9a21       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   ca198fa2b4b26       kube-apiserver-ha-285481
	7cd54d913bb97       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      4 minutes ago       Running             busybox                   1                   7045b908ab1b8       busybox-5b5d89c9d6-klvd7
	ca86ee48e13d9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      4 minutes ago       Running             coredns                   2                   d07c930b8ebca       coredns-5dd5756b68-qxtp4
	915cb8e3716e7       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      4 minutes ago       Running             coredns                   2                   db6cf9923e6bd       coredns-5dd5756b68-9c44k
	189cc226d08d1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   fcd137b8d3b9f       kube-proxy-cml9m
	64c336a92304b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               3                   a8569823c9966       kindnet-9fd6f
	a32853d47c1c9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   8317b128a7c91       etcd-ha-285481
	931143ffbabb9       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   ca198fa2b4b26       kube-apiserver-ha-285481
	809dd4d909af8       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Running             kube-vip                  3                   5a920e208332e       kube-vip-ha-285481
	97929629ecc64       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   aa73d4571ec49       kube-scheduler-ha-285481
	803b9cd3104df       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   1                   b6941d02e47f9       kube-controller-manager-ha-285481
	4a6e055fb6555       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       5                   87554e8de736b       storage-provisioner
	c09a826d051f4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Exited              coredns                   1                   6689feff0da96       coredns-5dd5756b68-qxtp4
	f160f73a45516       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Exited              coredns                   1                   68895f60fb2d2       coredns-5dd5756b68-9c44k
	706f8e951e3f0       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      8 minutes ago       Exited              kube-vip                  2                   0a7887e08f455       kube-vip-ha-285481
	7e21f8e6f1787       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   13 minutes ago      Exited              busybox                   0                   8857e9f8aa447       busybox-5b5d89c9d6-klvd7
	e7c7732963470       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   5404a98a681ea       kube-proxy-cml9m
	b1799ad1e14d3       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      16 minutes ago      Exited              kube-scheduler            0                   9a7f75d914382       kube-scheduler-ha-285481
	a6eaa3307ddf1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      16 minutes ago      Exited              etcd                      0                   8e777ceb1c377       etcd-ha-285481
	
	
	==> coredns [915cb8e3716e7209e4517aa38b03facf058d399d3c83386c4ceadc824de088c2] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59945 - 7882 "HINFO IN 146871251892497703.7499368807378574144. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.021073832s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [c09a826d051f475374ca2e40d6cc678edbccc5a7d3b25a5ce79e21e661311e00] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:42795 - 12711 "HINFO IN 2521867098849721333.7657461000445057215. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.018792575s
	
	
	==> coredns [ca86ee48e13d9f453de46deec0278c2a18b5e8099135d7644d71ef1dffe6b518] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39952 - 23738 "HINFO IN 3992818473559059300.5048460502519144909. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021732356s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	
	
	==> coredns [f160f73a455161a71db393b74454487a5d459569d637045e7f17fd8e2a8ee32d] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:46860 - 14168 "HINFO IN 6553267951195986899.7024224011946022153. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020673615s
	
	
	==> describe nodes <==
	Name:               ha-285481
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T23_10_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:10:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:26:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:25:42 +0000   Fri, 15 Mar 2024 23:25:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:25:42 +0000   Fri, 15 Mar 2024 23:25:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:25:42 +0000   Fri, 15 Mar 2024 23:25:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:25:42 +0000   Fri, 15 Mar 2024 23:25:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.23
	  Hostname:    ha-285481
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7afae64232d041e98363d899e90f24b0
	  System UUID:                7afae642-32d0-41e9-8363-d899e90f24b0
	  Boot ID:                    ac63bdb2-abe3-40ea-a654-ca3224dec308
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-klvd7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-9c44k             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-qxtp4             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-285481                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-9fd6f                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-285481             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-285481    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-cml9m                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-285481             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-285481                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m25s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           16m                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Warning  ContainerGCFailed        5m24s (x2 over 6m24s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-285481 event: Registered Node ha-285481 in Controller
	  Normal   NodeNotReady             100s                   node-controller  Node ha-285481 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     81s (x2 over 16m)      kubelet          Node ha-285481 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    81s (x2 over 16m)      kubelet          Node ha-285481 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  81s (x2 over 16m)      kubelet          Node ha-285481 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                81s (x2 over 16m)      kubelet          Node ha-285481 status is now: NodeReady
	
	
	Name:               ha-285481-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_12_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:11:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:23:23 +0000   Fri, 15 Mar 2024 23:22:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.201
	  Hostname:    ha-285481-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7f269fbf2ace479a8b9438486949ceb1
	  System UUID:                7f269fbf-2ace-479a-8b94-38486949ceb1
	  Boot ID:                    9b2251a9-b538-488f-a580-06286d5f2e17
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-tgxps                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 etcd-ha-285481-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-pnxpk                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-285481-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-285481-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-2hcgt                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-285481-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-285481-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 14m                    kube-proxy       
	  Normal  Starting                 4m                     kube-proxy       
	  Normal  RegisteredNode           14m                    node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode           13m                    node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  NodeNotReady             11m                    node-controller  Node ha-285481-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    4m49s (x8 over 4m49s)  kubelet          Node ha-285481-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 4m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m49s (x8 over 4m49s)  kubelet          Node ha-285481-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     4m49s (x7 over 4m49s)  kubelet          Node ha-285481-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m11s                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode           4m10s                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-285481-m02 event: Registered Node ha-285481-m02 in Controller
	
	
	Name:               ha-285481-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-285481-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=ha-285481
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_14_18_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:14:17 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-285481-m04
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:24:36 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:25:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:25:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:25:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 23:24:15 +0000   Fri, 15 Mar 2024 23:25:18 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.115
	  Hostname:    ha-285481-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d8e447d79a3745579ec32c4638493b56
	  System UUID:                d8e447d7-9a37-4557-9ec3-2c4638493b56
	  Boot ID:                    584eb9ad-42a9-4678-9ac1-7e878b8d2dbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-lrhp9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kindnet-vzxwb               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-sr2rg            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m44s                  kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)      kubelet          Node ha-285481-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)      kubelet          Node ha-285481-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)      kubelet          Node ha-285481-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m                    node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   NodeReady                12m                    kubelet          Node ha-285481-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m11s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           4m10s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   RegisteredNode           3m11s                  node-controller  Node ha-285481-m04 event: Registered Node ha-285481-m04 in Controller
	  Normal   NodeHasNoDiskPressure    2m48s (x2 over 2m48s)  kubelet          Node ha-285481-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  2m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m48s (x2 over 2m48s)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientMemory
	  Normal   Starting                 2m48s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     2m48s (x2 over 2m48s)  kubelet          Node ha-285481-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m48s                  kubelet          Node ha-285481-m04 has been rebooted, boot id: 584eb9ad-42a9-4678-9ac1-7e878b8d2dbf
	  Normal   NodeReady                2m48s                  kubelet          Node ha-285481-m04 status is now: NodeReady
	  Normal   NodeNotReady             105s (x2 over 3m31s)   node-controller  Node ha-285481-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +9.660439] systemd-fstab-generator[603]: Ignoring "noauto" option for root device
	[  +0.056188] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075855] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.154877] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.138534] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.233962] systemd-fstab-generator[667]: Ignoring "noauto" option for root device
	[  +4.832504] systemd-fstab-generator[773]: Ignoring "noauto" option for root device
	[  +0.064584] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.460453] systemd-fstab-generator[956]: Ignoring "noauto" option for root device
	[  +0.636379] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.224199] systemd-fstab-generator[1377]: Ignoring "noauto" option for root device
	[  +0.090626] kauditd_printk_skb: 51 callbacks suppressed
	[ +12.490228] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.030141] kauditd_printk_skb: 53 callbacks suppressed
	[Mar15 23:11] kauditd_printk_skb: 11 callbacks suppressed
	[Mar15 23:18] kauditd_printk_skb: 1 callbacks suppressed
	[Mar15 23:21] systemd-fstab-generator[4051]: Ignoring "noauto" option for root device
	[  +0.177892] systemd-fstab-generator[4063]: Ignoring "noauto" option for root device
	[  +0.307408] systemd-fstab-generator[4128]: Ignoring "noauto" option for root device
	[  +0.241200] systemd-fstab-generator[4195]: Ignoring "noauto" option for root device
	[  +0.365007] systemd-fstab-generator[4324]: Ignoring "noauto" option for root device
	[ +10.693841] systemd-fstab-generator[4479]: Ignoring "noauto" option for root device
	[  +0.099959] kauditd_printk_skb: 120 callbacks suppressed
	[Mar15 23:22] kauditd_printk_skb: 108 callbacks suppressed
	[ +28.844834] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [a32853d47c1c94fb0d8313b61e063ebc878c491a6c04f6ea877273711e62a991] <==
	{"level":"info","ts":"2024-03-15T23:23:30.649773Z","caller":"traceutil/trace.go:171","msg":"trace[653062951] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:4; response_revision:2194; }","duration":"200.717278ms","start":"2024-03-15T23:23:30.449024Z","end":"2024-03-15T23:23:30.649741Z","steps":["trace[653062951] 'range keys from in-memory index tree'  (duration: 199.314742ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:23:31.393922Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.401474Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c6baa4636f442c95","to":"de3c7347423bf565","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-15T23:23:31.401589Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.407Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.407524Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:23:31.409911Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"c6baa4636f442c95","to":"de3c7347423bf565","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-15T23:23:31.410099Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:24:28.947285Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c6baa4636f442c95 switched to configuration voters=(7501810986888075948 14319938712153369749)"}
	{"level":"info","ts":"2024-03-15T23:24:28.947521Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"7d4cc2b8d7236707","local-member-id":"c6baa4636f442c95","removed-remote-peer-id":"de3c7347423bf565","removed-remote-peer-urls":["https://192.168.39.248:2380"]}
	{"level":"info","ts":"2024-03-15T23:24:28.947583Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"de3c7347423bf565"}
	{"level":"warn","ts":"2024-03-15T23:24:28.947927Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:24:28.947992Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de3c7347423bf565"}
	{"level":"warn","ts":"2024-03-15T23:24:28.948116Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:24:28.948148Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:24:28.948376Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"warn","ts":"2024-03-15T23:24:28.948613Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565","error":"context canceled"}
	{"level":"warn","ts":"2024-03-15T23:24:28.948731Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"de3c7347423bf565","error":"failed to read de3c7347423bf565 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-15T23:24:28.94878Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"warn","ts":"2024-03-15T23:24:28.948986Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565","error":"context canceled"}
	{"level":"info","ts":"2024-03-15T23:24:28.949026Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:24:28.949047Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:24:28.949061Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"c6baa4636f442c95","removed-remote-peer-id":"de3c7347423bf565"}
	{"level":"warn","ts":"2024-03-15T23:24:28.962953Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c6baa4636f442c95","remote-peer-id-stream-handler":"c6baa4636f442c95","remote-peer-id-from":"de3c7347423bf565"}
	{"level":"warn","ts":"2024-03-15T23:24:28.965611Z","caller":"rafthttp/http.go:394","msg":"rejected stream from remote peer because it was removed","local-member-id":"c6baa4636f442c95","remote-peer-id-stream-handler":"c6baa4636f442c95","remote-peer-id-from":"de3c7347423bf565"}
	
	
	==> etcd [a6eaa3307ddf13ea0305da7b959a43a3caa95e047a96e8602a08c3f72377cfca] <==
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	WARNING: 2024/03/15 23:20:06 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-15T23:20:06.628082Z","caller":"etcdserver/v3_server.go:897","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":3212630333545603566,"retry-timeout":"500ms"}
	{"level":"warn","ts":"2024-03-15T23:20:06.629359Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:20:06.629394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.23:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T23:20:06.629465Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"c6baa4636f442c95","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-15T23:20:06.629678Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.629734Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.629862Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.629982Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.630054Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.630126Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.63014Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"681bc958a5961aac"}
	{"level":"info","ts":"2024-03-15T23:20:06.630145Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630153Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630179Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.63025Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630298Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.63033Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"c6baa4636f442c95","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.630361Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"de3c7347423bf565"}
	{"level":"info","ts":"2024-03-15T23:20:06.633146Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-03-15T23:20:06.633251Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.23:2380"}
	{"level":"info","ts":"2024-03-15T23:20:06.633263Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-285481","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.23:2380"],"advertise-client-urls":["https://192.168.39.23:2379"]}
	
	
	==> kernel <==
	 23:27:03 up 17 min,  0 users,  load average: 0.36, 0.38, 0.34
	Linux ha-285481 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [64c336a92304b80333794a90f42e0f6bcd2fa01016d711d2f2e294e64ccb853d] <==
	I0315 23:21:53.327677       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0315 23:22:03.691057       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0315 23:22:13.700856       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0315 23:22:14.702477       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: connection refused
	I0315 23:22:21.676105       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0315 23:22:24.747810       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [7aa0725072636d6e1d7818c35b790393a222160d0b7171ce34fa7ab2e8c3311b] <==
	I0315 23:26:14.682220       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:26:24.699217       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:26:24.699309       1 main.go:227] handling current node
	I0315 23:26:24.699347       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:26:24.699365       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:26:24.699601       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:26:24.699732       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:26:34.707407       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:26:34.707599       1 main.go:227] handling current node
	I0315 23:26:34.707695       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:26:34.707723       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:26:34.707878       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:26:34.707902       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:26:44.723988       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:26:44.724080       1 main.go:227] handling current node
	I0315 23:26:44.724104       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:26:44.724122       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:26:44.724394       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:26:44.724444       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	I0315 23:26:54.739373       1 main.go:223] Handling node with IPs: map[192.168.39.23:{}]
	I0315 23:26:54.739421       1 main.go:227] handling current node
	I0315 23:26:54.739431       1 main.go:223] Handling node with IPs: map[192.168.39.201:{}]
	I0315 23:26:54.739437       1 main.go:250] Node ha-285481-m02 has CIDR [10.244.1.0/24] 
	I0315 23:26:54.739559       1 main.go:223] Handling node with IPs: map[192.168.39.115:{}]
	I0315 23:26:54.739564       1 main.go:250] Node ha-285481-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [2dae1e7bd9a21921f52384abd570c0ba7583f1fee91c609c8b810ce5cffd026a] <==
	I0315 23:22:40.232461       1 controller.go:116] Starting legacy_token_tracking_controller
	I0315 23:22:40.232467       1 shared_informer.go:311] Waiting for caches to sync for configmaps
	I0315 23:22:40.287388       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0315 23:22:40.290762       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 23:22:40.322571       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 23:22:40.354071       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 23:22:40.354299       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 23:22:40.354449       1 aggregator.go:166] initial CRD sync complete...
	I0315 23:22:40.354488       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 23:22:40.354495       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 23:22:40.354501       1 cache.go:39] Caches are synced for autoregister controller
	I0315 23:22:40.419675       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 23:22:40.428146       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 23:22:40.428225       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 23:22:40.428367       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 23:22:40.430256       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 23:22:40.461185       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0315 23:22:40.471253       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.248]
	I0315 23:22:40.472801       1 controller.go:624] quota admission added evaluator for: endpoints
	I0315 23:22:40.489091       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0315 23:22:40.495135       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0315 23:22:41.239538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0315 23:22:41.735789       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.23 192.168.39.248]
	W0315 23:23:01.731149       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.201 192.168.39.23]
	W0315 23:24:41.741836       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.201 192.168.39.23]
	
	
	==> kube-apiserver [931143ffbabb902729573ad1370b464b1e255873bc4340df0d091c399d23748b] <==
	I0315 23:21:53.246963       1 options.go:220] external host was not specified, using 192.168.39.23
	I0315 23:21:53.248394       1 server.go:148] Version: v1.28.4
	I0315 23:21:53.248537       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:21:54.219360       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0315 23:21:54.246387       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0315 23:21:54.247570       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0315 23:21:54.248026       1 instance.go:298] Using reconciler: lease
	W0315 23:22:14.215530       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0315 23:22:14.218367       1 logging.go:59] [core] [Channel #3 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0315 23:22:14.249802       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [803b9cd3104df923ed414621e3b3d556a0f3f2bc16adc3d72fde47e2081f6c3a] <==
	I0315 23:21:53.805373       1 serving.go:348] Generated self-signed cert in-memory
	I0315 23:21:54.490131       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0315 23:21:54.490176       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:21:54.492489       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0315 23:21:54.492710       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 23:21:54.493035       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0315 23:21:54.493271       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0315 23:22:15.257141       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.23:8443/healthz\": dial tcp 192.168.39.23:8443: connect: connection refused"
	
	
	==> kube-controller-manager [bddf2a55a13f457acb9ddb0aa5bbd2fc81f0f4a0fdbaa9a134b89c7cd963c316] <==
	E0315 23:25:32.633035       1 gc_controller.go:153] "Failed to get node" err="node \"ha-285481-m03\" not found" node="ha-285481-m03"
	I0315 23:25:32.647689       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-zptcr"
	I0315 23:25:32.681916       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-zptcr"
	I0315 23:25:32.682034       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/etcd-ha-285481-m03"
	I0315 23:25:32.717586       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/etcd-ha-285481-m03"
	I0315 23:25:32.717729       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-scheduler-ha-285481-m03"
	I0315 23:25:32.752599       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-scheduler-ha-285481-m03"
	I0315 23:25:32.752733       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-d2fjd"
	I0315 23:25:32.787824       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-d2fjd"
	I0315 23:25:32.787868       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-apiserver-ha-285481-m03"
	I0315 23:25:32.824423       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-apiserver-ha-285481-m03"
	I0315 23:25:32.824529       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-vip-ha-285481-m03"
	I0315 23:25:32.869791       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-vip-ha-285481-m03"
	I0315 23:25:32.869837       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-controller-manager-ha-285481-m03"
	I0315 23:25:32.908138       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-controller-manager-ha-285481-m03"
	I0315 23:25:39.643053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.216162ms"
	I0315 23:25:39.643757       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="140.618µs"
	I0315 23:25:39.704771       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.252532ms"
	I0315 23:25:39.706118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.552µs"
	I0315 23:25:39.709112       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.640557ms"
	I0315 23:25:39.709233       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="64.547µs"
	I0315 23:25:43.362469       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-9c44k" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-9c44k"
	I0315 23:25:43.362967       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68-qxtp4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/coredns-5dd5756b68-qxtp4"
	I0315 23:25:43.363105       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-klvd7" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-klvd7"
	I0315 23:25:43.363195       1 event.go:307] "Event occurred" object="kube-system/storage-provisioner" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod kube-system/storage-provisioner"
	
	
	==> kube-proxy [189cc226d08d1ebf89268547a6405b08b23b95e9db97b4e679be1cb47e22c6b3] <==
	E0315 23:22:16.300070       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-285481": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:37.804699       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-285481": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 23:22:37.805219       1 server.go:969] "Can't determine this node's IP, assuming 127.0.0.1; if this is incorrect, please set the --bind-address flag"
	I0315 23:22:37.847399       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:22:37.847431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:22:37.850239       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:22:37.850496       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:22:37.851074       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:22:37.851121       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:22:37.853223       1 config.go:188] "Starting service config controller"
	I0315 23:22:37.853307       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:22:37.853351       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:22:37.853387       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:22:37.854699       1 config.go:315] "Starting node config controller"
	I0315 23:22:37.854738       1 shared_informer.go:311] Waiting for caches to sync for node config
	E0315 23:22:40.875693       1 event_broadcaster.go:274] Unable to write event: 'Post "https://control-plane.minikube.internal:8443/apis/events.k8s.io/v1/namespaces/default/events": dial tcp 192.168.39.254:8443: connect: no route to host' (may retry after sleeping)
	W0315 23:22:40.875886       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:40.876026       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:22:40.876101       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:40.876126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:22:40.876210       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:22:40.876237       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&limit=500&resourceVersion=0": dial tcp 192.168.39.254:8443: connect: no route to host
	I0315 23:22:41.755002       1 shared_informer.go:318] Caches are synced for node config
	I0315 23:22:42.354520       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:22:42.354758       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [e7c7732963470a3a2dc59f759d3af3a482c64c2cab5bd465da4fc2da30ff4bb2] <==
	E0315 23:18:45.867558       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:45.867338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:45.867611       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:52.523430       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:52.523783       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:52.523881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:52.523952       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:18:52.523577       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:18:52.524007       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:02.763863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:02.764086       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:05.835130       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:05.835714       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:05.835879       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:05.835940       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:27.339733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:27.339810       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://control-plane.minikube.internal:8443/apis/discovery.k8s.io/v1/endpointslices?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1727": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:27.339892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:27.339944       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:19:27.339996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:19:27.340106       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:20:01.132525       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:20:01.133376       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)ha-285481&resourceVersion=1719": dial tcp 192.168.39.254:8443: connect: no route to host
	W0315 23:20:01.133268       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	E0315 23:20:01.133586       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)headless%!C(MISSING)%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=1695": dial tcp 192.168.39.254:8443: connect: no route to host
	
	
	==> kube-scheduler [97929629ecc64bf8abf735e203ab2e97f692a679601de2e15496f7676a9281e7] <==
	W0315 23:22:40.291050       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 23:22:40.291142       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 23:22:40.291220       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 23:22:40.291255       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 23:22:40.291312       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 23:22:40.291320       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 23:22:40.291381       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 23:22:40.291435       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 23:22:40.291587       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 23:22:40.291694       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 23:22:40.291803       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:22:40.291840       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 23:22:40.291977       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 23:22:40.292024       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 23:22:40.292081       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 23:22:40.292110       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 23:22:40.292162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 23:22:40.292179       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 23:22:40.320338       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 23:22:40.320484       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0315 23:23:05.868280       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 23:24:25.665037       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-lrhp9\": pod busybox-5b5d89c9d6-lrhp9 is already assigned to node \"ha-285481-m04\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-lrhp9" node="ha-285481-m04"
	E0315 23:24:25.665484       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 7f8b0bac-1caf-41e9-87dd-1f755099d407(default/busybox-5b5d89c9d6-lrhp9) wasn't assumed so cannot be forgotten"
	E0315 23:24:25.665696       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-lrhp9\": pod busybox-5b5d89c9d6-lrhp9 is already assigned to node \"ha-285481-m04\"" pod="default/busybox-5b5d89c9d6-lrhp9"
	I0315 23:24:25.665785       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-lrhp9" node="ha-285481-m04"
	
	
	==> kube-scheduler [b1799ad1e14d3e76ac9165874547f9a728ba93ed22060da5b079f734133a2938] <==
	W0315 23:19:59.785544       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 23:19:59.785584       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 23:19:59.795895       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:19:59.795965       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0315 23:20:00.130896       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0315 23:20:00.130979       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0315 23:20:00.273877       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 23:20:00.274055       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 23:20:00.403325       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 23:20:00.403487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 23:20:00.510982       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 23:20:00.511059       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 23:20:01.117108       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0315 23:20:01.117209       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 23:20:01.163088       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0315 23:20:01.163189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0315 23:20:01.557286       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0315 23:20:01.557313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0315 23:20:05.373484       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 23:20:05.373736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0315 23:20:06.132930       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0315 23:20:06.132953       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0315 23:20:06.350176       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0315 23:20:06.350303       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0315 23:20:06.350546       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 23:25:30 ha-285481 kubelet[1384]: E0315 23:25:30.366049    1384 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-285481?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.990505    1384 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Service ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.990690    1384 reflector.go:458] object-"kube-system"/"kube-proxy": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.990748    1384 reflector.go:458] object-"kube-system"/"coredns": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: I0315 23:25:31.990890    1384 status_manager.go:853] "Failed to get status for pod" podUID="53d0c1b0-3c5c-443e-a653-9b91407c8792" pod="kube-system/storage-provisioner" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/storage-provisioner\": http2: client connection lost"
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.991103    1384 reflector.go:458] object-"default"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.991264    1384 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.CSIDriver ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.991331    1384 reflector.go:458] object-"kube-system"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.991396    1384 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.Node ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: E0315 23:25:31.991513    1384 controller.go:193] "Failed to update lease" err="Put \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-285481?timeout=10s\": http2: client connection lost"
	Mar 15 23:25:31 ha-285481 kubelet[1384]: I0315 23:25:31.991713    1384 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
	Mar 15 23:25:31 ha-285481 kubelet[1384]: E0315 23:25:31.992400    1384 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ha-285481\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/ha-285481?timeout=10s\": http2: client connection lost"
	Mar 15 23:25:31 ha-285481 kubelet[1384]: E0315 23:25:31.992464    1384 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count"
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.990507    1384 reflector.go:458] pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:31 ha-285481 kubelet[1384]: W0315 23:25:31.993384    1384 reflector.go:458] vendor/k8s.io/client-go/informers/factory.go:150: watch of *v1.RuntimeClass ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding
	Mar 15 23:25:39 ha-285481 kubelet[1384]: E0315 23:25:39.430138    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:25:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:25:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:25:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:25:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:26:39 ha-285481 kubelet[1384]: E0315 23:26:39.420357    1384 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:26:39 ha-285481 kubelet[1384]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:26:39 ha-285481 kubelet[1384]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:26:39 ha-285481 kubelet[1384]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:26:39 ha-285481 kubelet[1384]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 23:27:02.735574  100116 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17991-75602/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-285481 -n ha-285481
helpers_test.go:261: (dbg) Run:  kubectl --context ha-285481 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (141.96s)
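Note: the `bufio.Scanner: token too long` message in the stderr above is Go's bufio.Scanner hitting its default 64 KiB per-line limit while reading lastStart.txt, which is why the last-start logs could not be echoed into the post-mortem. A minimal, self-contained sketch of reading such a file with a larger scanner buffer follows; this is not minikube's actual code, and the file path is a hypothetical stand-in.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
)

func main() {
	// Hypothetical path standing in for the lastStart.txt that failed to scan.
	f, err := os.Open("lastStart.txt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// bufio.Scanner rejects any single line longer than its buffer
	// (64 KiB by default) with "token too long"; give it more headroom.
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan error:", err)
	}
}
```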

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (309.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-658614
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-658614
E0315 23:43:58.906181   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:44:08.403528   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-658614: exit status 82 (2m2.043669193s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-658614-m03"  ...
	* Stopping node "multinode-658614-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-658614" : exit status 82
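Note: exit status 82 (GUEST_STOP_TIMEOUT) above means the stop command gave up after its wait window while the guest still reported "Running". The sketch below is only an illustration of that poll-until-stopped pattern, not minikube's implementation; `vmState` and the timeout value are assumptions for the example.

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// vmState is a hypothetical probe of the guest's power state.
func vmState() string { return "Running" }

// waitForStop polls the guest until it reports "Stopped" or the context expires.
func waitForStop(ctx context.Context) error {
	ticker := time.NewTicker(time.Second)
	defer ticker.Stop()
	for {
		if vmState() == "Stopped" {
			return nil
		}
		select {
		case <-ctx.Done():
			return errors.New("unable to stop vm, current state \"" + vmState() + "\"")
		case <-ticker.C:
		}
	}
}

func main() {
	// Short deadline for the sketch; the run above waited roughly two minutes.
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	if err := waitForStop(ctx); err != nil {
		fmt.Println("GUEST_STOP_TIMEOUT:", err)
	}
}
```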
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-658614 --wait=true -v=8 --alsologtostderr
E0315 23:47:11.447515   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-658614 --wait=true -v=8 --alsologtostderr: (3m4.878372136s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-658614
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-658614 -n multinode-658614
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-658614 logs -n 25: (1.571047958s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2872696795/001/cp-test_multinode-658614-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614:/home/docker/cp-test_multinode-658614-m02_multinode-658614.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614 sudo cat                                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m02_multinode-658614.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03:/home/docker/cp-test_multinode-658614-m02_multinode-658614-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614-m03 sudo cat                                   | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m02_multinode-658614-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp testdata/cp-test.txt                                                | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2872696795/001/cp-test_multinode-658614-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614:/home/docker/cp-test_multinode-658614-m03_multinode-658614.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614 sudo cat                                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m03_multinode-658614.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02:/home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614-m02 sudo cat                                   | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-658614 node stop m03                                                          | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	| node    | multinode-658614 node start                                                             | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-658614                                                                | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:42 UTC |                     |
	| stop    | -p multinode-658614                                                                     | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:42 UTC |                     |
	| start   | -p multinode-658614                                                                     | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:44 UTC | 15 Mar 24 23:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-658614                                                                | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
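The audit table above records the minikube invocations that preceded the failed restart (the CopyFile steps, then node stop/start, stop, and start). As a reading aid only, one cp/ssh pair from the table could be replayed against this profile roughly as sketched below; node names and paths are taken verbatim from the table rows, while the bare `minikube` binary name and flag ordering are assumptions (the harness itself invokes out/minikube-linux-amd64):

    # hedged sketch, reconstructed from the audit table above (not an additional test step)
    minikube -p multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt \
        multinode-658614-m02:/home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt
    minikube -p multinode-658614 ssh -n multinode-658614-m02 \
        sudo cat /home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt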
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 23:44:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 23:44:11.844237  108344 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:44:11.844368  108344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:44:11.844376  108344 out.go:304] Setting ErrFile to fd 2...
	I0315 23:44:11.844383  108344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:44:11.844617  108344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:44:11.845183  108344 out.go:298] Setting JSON to false
	I0315 23:44:11.846116  108344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8802,"bootTime":1710537450,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:44:11.846184  108344 start.go:139] virtualization: kvm guest
	I0315 23:44:11.848745  108344 out.go:177] * [multinode-658614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:44:11.850497  108344 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:44:11.850476  108344 notify.go:220] Checking for updates...
	I0315 23:44:11.851794  108344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:44:11.853104  108344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:44:11.854544  108344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:44:11.855874  108344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:44:11.857246  108344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:44:11.859387  108344 config.go:182] Loaded profile config "multinode-658614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:44:11.859544  108344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:44:11.860199  108344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:44:11.860280  108344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:44:11.877159  108344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I0315 23:44:11.877539  108344 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:44:11.878077  108344 main.go:141] libmachine: Using API Version  1
	I0315 23:44:11.878100  108344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:44:11.878468  108344 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:44:11.878645  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:44:11.913466  108344 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 23:44:11.914852  108344 start.go:297] selected driver: kvm2
	I0315 23:44:11.914870  108344 start.go:901] validating driver "kvm2" against &{Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:44:11.915047  108344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:44:11.915452  108344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:44:11.915537  108344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:44:11.930661  108344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:44:11.931782  108344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:44:11.931903  108344 cni.go:84] Creating CNI manager for ""
	I0315 23:44:11.931926  108344 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 23:44:11.932043  108344 start.go:340] cluster config:
	{Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:44:11.932334  108344 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:44:11.935256  108344 out.go:177] * Starting "multinode-658614" primary control-plane node in "multinode-658614" cluster
	I0315 23:44:11.936587  108344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:44:11.936638  108344 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 23:44:11.936652  108344 cache.go:56] Caching tarball of preloaded images
	I0315 23:44:11.936726  108344 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:44:11.936741  108344 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:44:11.936867  108344 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/config.json ...
	I0315 23:44:11.937077  108344 start.go:360] acquireMachinesLock for multinode-658614: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:44:11.937124  108344 start.go:364] duration metric: took 27.635µs to acquireMachinesLock for "multinode-658614"
	I0315 23:44:11.937144  108344 start.go:96] Skipping create...Using existing machine configuration
	I0315 23:44:11.937153  108344 fix.go:54] fixHost starting: 
	I0315 23:44:11.937450  108344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:44:11.937477  108344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:44:11.952051  108344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0315 23:44:11.952528  108344 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:44:11.953022  108344 main.go:141] libmachine: Using API Version  1
	I0315 23:44:11.953046  108344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:44:11.953365  108344 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:44:11.953534  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:44:11.953691  108344 main.go:141] libmachine: (multinode-658614) Calling .GetState
	I0315 23:44:11.955421  108344 fix.go:112] recreateIfNeeded on multinode-658614: state=Running err=<nil>
	W0315 23:44:11.955441  108344 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 23:44:11.958537  108344 out.go:177] * Updating the running kvm2 "multinode-658614" VM ...
	I0315 23:44:11.959824  108344 machine.go:94] provisionDockerMachine start ...
	I0315 23:44:11.959849  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:44:11.960054  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:11.962482  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:11.962941  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:11.962967  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:11.963108  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:11.963275  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:11.963434  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:11.963570  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:11.963722  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:11.963895  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:11.963905  108344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 23:44:12.085675  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-658614
	
	I0315 23:44:12.085701  108344 main.go:141] libmachine: (multinode-658614) Calling .GetMachineName
	I0315 23:44:12.085945  108344 buildroot.go:166] provisioning hostname "multinode-658614"
	I0315 23:44:12.085971  108344 main.go:141] libmachine: (multinode-658614) Calling .GetMachineName
	I0315 23:44:12.086139  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.089166  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.089532  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.089561  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.089740  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.089914  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.090138  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.090294  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.090475  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:12.090670  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:12.090695  108344 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-658614 && echo "multinode-658614" | sudo tee /etc/hostname
	I0315 23:44:12.224949  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-658614
	
	I0315 23:44:12.224986  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.227810  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.228189  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.228235  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.228359  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.228569  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.228714  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.228862  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.229027  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:12.229232  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:12.229249  108344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-658614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-658614/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-658614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:44:12.349223  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:44:12.349256  108344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:44:12.349283  108344 buildroot.go:174] setting up certificates
	I0315 23:44:12.349297  108344 provision.go:84] configureAuth start
	I0315 23:44:12.349314  108344 main.go:141] libmachine: (multinode-658614) Calling .GetMachineName
	I0315 23:44:12.349605  108344 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:44:12.352433  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.352762  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.352793  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.352926  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.355062  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.355386  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.355415  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.355683  108344 provision.go:143] copyHostCerts
	I0315 23:44:12.355730  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:44:12.355766  108344 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:44:12.355776  108344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:44:12.355841  108344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:44:12.355926  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:44:12.355948  108344 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:44:12.355957  108344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:44:12.355995  108344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:44:12.356075  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:44:12.356101  108344 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:44:12.356111  108344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:44:12.356143  108344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:44:12.356198  108344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.multinode-658614 san=[127.0.0.1 192.168.39.5 localhost minikube multinode-658614]
	I0315 23:44:12.448319  108344 provision.go:177] copyRemoteCerts
	I0315 23:44:12.448408  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:44:12.448440  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.451093  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.451483  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.451520  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.451717  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.451923  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.452137  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.452291  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:44:12.541199  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:44:12.541285  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:44:12.573311  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:44:12.573390  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0315 23:44:12.605134  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:44:12.605202  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 23:44:12.632715  108344 provision.go:87] duration metric: took 283.401081ms to configureAuth
	I0315 23:44:12.632751  108344 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:44:12.632990  108344 config.go:182] Loaded profile config "multinode-658614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:44:12.633077  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.635557  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.635850  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.635879  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.636042  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.636243  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.636392  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.636544  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.636716  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:12.636920  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:12.636937  108344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:45:43.525961  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:45:43.525996  108344 machine.go:97] duration metric: took 1m31.566153193s to provisionDockerMachine
	I0315 23:45:43.526011  108344 start.go:293] postStartSetup for "multinode-658614" (driver="kvm2")
	I0315 23:45:43.526023  108344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:45:43.526047  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.526427  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:45:43.526472  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.529825  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.530289  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.530310  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.530483  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.530681  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.530841  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.530980  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:45:43.620071  108344 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:45:43.624213  108344 command_runner.go:130] > NAME=Buildroot
	I0315 23:45:43.624230  108344 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0315 23:45:43.624234  108344 command_runner.go:130] > ID=buildroot
	I0315 23:45:43.624239  108344 command_runner.go:130] > VERSION_ID=2023.02.9
	I0315 23:45:43.624245  108344 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0315 23:45:43.624402  108344 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:45:43.624423  108344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:45:43.624500  108344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:45:43.624605  108344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:45:43.624619  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:45:43.624703  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:45:43.634722  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:45:43.660452  108344 start.go:296] duration metric: took 134.424394ms for postStartSetup
	I0315 23:45:43.660495  108344 fix.go:56] duration metric: took 1m31.723341545s for fixHost
	I0315 23:45:43.660520  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.663251  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.663645  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.663676  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.663813  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.664028  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.664217  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.664340  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.664534  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:45:43.664700  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:45:43.664711  108344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:45:43.780435  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710546343.752985565
	
	I0315 23:45:43.780465  108344 fix.go:216] guest clock: 1710546343.752985565
	I0315 23:45:43.780475  108344 fix.go:229] Guest: 2024-03-15 23:45:43.752985565 +0000 UTC Remote: 2024-03-15 23:45:43.660500222 +0000 UTC m=+91.866010704 (delta=92.485343ms)
	I0315 23:45:43.780502  108344 fix.go:200] guest clock delta is within tolerance: 92.485343ms
	I0315 23:45:43.780508  108344 start.go:83] releasing machines lock for "multinode-658614", held for 1m31.843371453s
	I0315 23:45:43.780529  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.780786  108344 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:45:43.783656  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.784058  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.784113  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.784232  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.784750  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.784967  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.785062  108344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:45:43.785117  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.785190  108344 ssh_runner.go:195] Run: cat /version.json
	I0315 23:45:43.785213  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.787678  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.787896  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.788040  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.788066  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.788227  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.788242  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.788254  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.788383  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.788461  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.788592  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.788594  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.788755  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.788763  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:45:43.788875  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:45:43.868092  108344 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0315 23:45:43.868454  108344 ssh_runner.go:195] Run: systemctl --version
	I0315 23:45:43.893211  108344 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0315 23:45:43.893254  108344 command_runner.go:130] > systemd 252 (252)
	I0315 23:45:43.893273  108344 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0315 23:45:43.893323  108344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:45:44.055930  108344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 23:45:44.072544  108344 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0315 23:45:44.072652  108344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:45:44.072723  108344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:45:44.082252  108344 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0315 23:45:44.082279  108344 start.go:494] detecting cgroup driver to use...
	I0315 23:45:44.082345  108344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:45:44.099124  108344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:45:44.113500  108344 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:45:44.113579  108344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:45:44.127840  108344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:45:44.141627  108344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:45:44.289795  108344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:45:44.430265  108344 docker.go:233] disabling docker service ...
	I0315 23:45:44.430342  108344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:45:44.447126  108344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:45:44.461276  108344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:45:44.602410  108344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:45:44.747722  108344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0315 23:45:44.761943  108344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:45:44.781948  108344 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0315 23:45:44.782518  108344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:45:44.782571  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.793215  108344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:45:44.793276  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.803469  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.813764  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.824092  108344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:45:44.834450  108344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:45:44.843611  108344 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0315 23:45:44.843662  108344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:45:44.854110  108344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:45:45.004259  108344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:45:50.574154  108344 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.569842268s)
	I0315 23:45:50.574191  108344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:45:50.574251  108344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:45:50.579391  108344 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0315 23:45:50.579427  108344 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0315 23:45:50.579437  108344 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0315 23:45:50.579447  108344 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 23:45:50.579454  108344 command_runner.go:130] > Access: 2024-03-15 23:45:50.430659656 +0000
	I0315 23:45:50.579463  108344 command_runner.go:130] > Modify: 2024-03-15 23:45:50.430659656 +0000
	I0315 23:45:50.579473  108344 command_runner.go:130] > Change: 2024-03-15 23:45:50.430659656 +0000
	I0315 23:45:50.579478  108344 command_runner.go:130] >  Birth: -
	I0315 23:45:50.579571  108344 start.go:562] Will wait 60s for crictl version
	I0315 23:45:50.579622  108344 ssh_runner.go:195] Run: which crictl
	I0315 23:45:50.583233  108344 command_runner.go:130] > /usr/bin/crictl
	I0315 23:45:50.583417  108344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:45:50.620256  108344 command_runner.go:130] > Version:  0.1.0
	I0315 23:45:50.620290  108344 command_runner.go:130] > RuntimeName:  cri-o
	I0315 23:45:50.620297  108344 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0315 23:45:50.620305  108344 command_runner.go:130] > RuntimeApiVersion:  v1
	I0315 23:45:50.621581  108344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:45:50.621664  108344 ssh_runner.go:195] Run: crio --version
	I0315 23:45:50.651948  108344 command_runner.go:130] > crio version 1.29.1
	I0315 23:45:50.651971  108344 command_runner.go:130] > Version:        1.29.1
	I0315 23:45:50.651978  108344 command_runner.go:130] > GitCommit:      unknown
	I0315 23:45:50.651985  108344 command_runner.go:130] > GitCommitDate:  unknown
	I0315 23:45:50.651992  108344 command_runner.go:130] > GitTreeState:   clean
	I0315 23:45:50.652001  108344 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0315 23:45:50.652007  108344 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 23:45:50.652015  108344 command_runner.go:130] > Compiler:       gc
	I0315 23:45:50.652023  108344 command_runner.go:130] > Platform:       linux/amd64
	I0315 23:45:50.652035  108344 command_runner.go:130] > Linkmode:       dynamic
	I0315 23:45:50.652042  108344 command_runner.go:130] > BuildTags:      
	I0315 23:45:50.652048  108344 command_runner.go:130] >   containers_image_ostree_stub
	I0315 23:45:50.652054  108344 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 23:45:50.652059  108344 command_runner.go:130] >   btrfs_noversion
	I0315 23:45:50.652065  108344 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 23:45:50.652076  108344 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 23:45:50.652081  108344 command_runner.go:130] >   seccomp
	I0315 23:45:50.652086  108344 command_runner.go:130] > LDFlags:          unknown
	I0315 23:45:50.652093  108344 command_runner.go:130] > SeccompEnabled:   true
	I0315 23:45:50.652100  108344 command_runner.go:130] > AppArmorEnabled:  false
	I0315 23:45:50.652177  108344 ssh_runner.go:195] Run: crio --version
	I0315 23:45:50.680856  108344 command_runner.go:130] > crio version 1.29.1
	I0315 23:45:50.680879  108344 command_runner.go:130] > Version:        1.29.1
	I0315 23:45:50.680885  108344 command_runner.go:130] > GitCommit:      unknown
	I0315 23:45:50.680896  108344 command_runner.go:130] > GitCommitDate:  unknown
	I0315 23:45:50.680900  108344 command_runner.go:130] > GitTreeState:   clean
	I0315 23:45:50.680905  108344 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0315 23:45:50.680910  108344 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 23:45:50.680913  108344 command_runner.go:130] > Compiler:       gc
	I0315 23:45:50.680918  108344 command_runner.go:130] > Platform:       linux/amd64
	I0315 23:45:50.680921  108344 command_runner.go:130] > Linkmode:       dynamic
	I0315 23:45:50.680927  108344 command_runner.go:130] > BuildTags:      
	I0315 23:45:50.680931  108344 command_runner.go:130] >   containers_image_ostree_stub
	I0315 23:45:50.680936  108344 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 23:45:50.680940  108344 command_runner.go:130] >   btrfs_noversion
	I0315 23:45:50.680944  108344 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 23:45:50.680950  108344 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 23:45:50.680955  108344 command_runner.go:130] >   seccomp
	I0315 23:45:50.680959  108344 command_runner.go:130] > LDFlags:          unknown
	I0315 23:45:50.680963  108344 command_runner.go:130] > SeccompEnabled:   true
	I0315 23:45:50.680968  108344 command_runner.go:130] > AppArmorEnabled:  false
	I0315 23:45:50.684271  108344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:45:50.685746  108344 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:45:50.688406  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:50.688680  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:50.688716  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:50.688912  108344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:45:50.693409  108344 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0315 23:45:50.693562  108344 kubeadm.go:877] updating cluster {Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 23:45:50.693750  108344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:45:50.693813  108344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:45:50.751426  108344 command_runner.go:130] > {
	I0315 23:45:50.751453  108344 command_runner.go:130] >   "images": [
	I0315 23:45:50.751458  108344 command_runner.go:130] >     {
	I0315 23:45:50.751470  108344 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 23:45:50.751475  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751481  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 23:45:50.751485  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751489  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751497  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 23:45:50.751504  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 23:45:50.751510  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751514  108344 command_runner.go:130] >       "size": "65258016",
	I0315 23:45:50.751521  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751526  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751536  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751542  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751546  108344 command_runner.go:130] >     },
	I0315 23:45:50.751552  108344 command_runner.go:130] >     {
	I0315 23:45:50.751561  108344 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 23:45:50.751574  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751587  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 23:45:50.751591  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751607  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751615  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 23:45:50.751622  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 23:45:50.751629  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751633  108344 command_runner.go:130] >       "size": "65291810",
	I0315 23:45:50.751636  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751644  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751650  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751653  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751659  108344 command_runner.go:130] >     },
	I0315 23:45:50.751663  108344 command_runner.go:130] >     {
	I0315 23:45:50.751670  108344 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 23:45:50.751676  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751682  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 23:45:50.751688  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751692  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751700  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 23:45:50.751708  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 23:45:50.751714  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751718  108344 command_runner.go:130] >       "size": "1363676",
	I0315 23:45:50.751724  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751728  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751734  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751739  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751744  108344 command_runner.go:130] >     },
	I0315 23:45:50.751748  108344 command_runner.go:130] >     {
	I0315 23:45:50.751756  108344 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 23:45:50.751760  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751765  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 23:45:50.751771  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751775  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751785  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 23:45:50.751800  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 23:45:50.751806  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751811  108344 command_runner.go:130] >       "size": "31470524",
	I0315 23:45:50.751817  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751826  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751832  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751836  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751841  108344 command_runner.go:130] >     },
	I0315 23:45:50.751845  108344 command_runner.go:130] >     {
	I0315 23:45:50.751853  108344 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 23:45:50.751859  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751865  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 23:45:50.751870  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751874  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751883  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 23:45:50.751892  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 23:45:50.751896  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751900  108344 command_runner.go:130] >       "size": "53621675",
	I0315 23:45:50.751903  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751907  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751913  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751917  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751921  108344 command_runner.go:130] >     },
	I0315 23:45:50.751924  108344 command_runner.go:130] >     {
	I0315 23:45:50.751932  108344 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 23:45:50.751936  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751941  108344 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 23:45:50.751947  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751951  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751959  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 23:45:50.751968  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 23:45:50.751974  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751978  108344 command_runner.go:130] >       "size": "295456551",
	I0315 23:45:50.751983  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.751987  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.751993  108344 command_runner.go:130] >       },
	I0315 23:45:50.751997  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752003  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752007  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752012  108344 command_runner.go:130] >     },
	I0315 23:45:50.752023  108344 command_runner.go:130] >     {
	I0315 23:45:50.752031  108344 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 23:45:50.752035  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752042  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 23:45:50.752046  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752053  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752060  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 23:45:50.752069  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 23:45:50.752074  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752078  108344 command_runner.go:130] >       "size": "127226832",
	I0315 23:45:50.752084  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752093  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.752100  108344 command_runner.go:130] >       },
	I0315 23:45:50.752103  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752109  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752113  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752117  108344 command_runner.go:130] >     },
	I0315 23:45:50.752120  108344 command_runner.go:130] >     {
	I0315 23:45:50.752129  108344 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 23:45:50.752133  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752139  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 23:45:50.752144  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752149  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752170  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 23:45:50.752180  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 23:45:50.752186  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752190  108344 command_runner.go:130] >       "size": "123261750",
	I0315 23:45:50.752196  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752200  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.752205  108344 command_runner.go:130] >       },
	I0315 23:45:50.752209  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752215  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752219  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752225  108344 command_runner.go:130] >     },
	I0315 23:45:50.752229  108344 command_runner.go:130] >     {
	I0315 23:45:50.752237  108344 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 23:45:50.752246  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752254  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 23:45:50.752258  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752262  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752271  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 23:45:50.752277  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 23:45:50.752281  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752284  108344 command_runner.go:130] >       "size": "74749335",
	I0315 23:45:50.752288  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.752291  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752295  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752298  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752301  108344 command_runner.go:130] >     },
	I0315 23:45:50.752307  108344 command_runner.go:130] >     {
	I0315 23:45:50.752313  108344 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 23:45:50.752319  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752324  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 23:45:50.752330  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752334  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752343  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 23:45:50.752352  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 23:45:50.752358  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752362  108344 command_runner.go:130] >       "size": "61551410",
	I0315 23:45:50.752365  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752371  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.752374  108344 command_runner.go:130] >       },
	I0315 23:45:50.752380  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752384  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752388  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752391  108344 command_runner.go:130] >     },
	I0315 23:45:50.752394  108344 command_runner.go:130] >     {
	I0315 23:45:50.752403  108344 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 23:45:50.752407  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752413  108344 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 23:45:50.752417  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752421  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752435  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 23:45:50.752445  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 23:45:50.752448  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752452  108344 command_runner.go:130] >       "size": "750414",
	I0315 23:45:50.752456  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752460  108344 command_runner.go:130] >         "value": "65535"
	I0315 23:45:50.752465  108344 command_runner.go:130] >       },
	I0315 23:45:50.752469  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752475  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752479  108344 command_runner.go:130] >       "pinned": true
	I0315 23:45:50.752482  108344 command_runner.go:130] >     }
	I0315 23:45:50.752486  108344 command_runner.go:130] >   ]
	I0315 23:45:50.752491  108344 command_runner.go:130] > }
	I0315 23:45:50.752795  108344 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:45:50.752812  108344 crio.go:415] Images already preloaded, skipping extraction
	I0315 23:45:50.752863  108344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:45:50.792433  108344 command_runner.go:130] > {
	I0315 23:45:50.792459  108344 command_runner.go:130] >   "images": [
	I0315 23:45:50.792463  108344 command_runner.go:130] >     {
	I0315 23:45:50.792471  108344 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 23:45:50.792476  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792481  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 23:45:50.792485  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792488  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792497  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 23:45:50.792503  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 23:45:50.792508  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792513  108344 command_runner.go:130] >       "size": "65258016",
	I0315 23:45:50.792516  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792520  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792531  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792541  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792544  108344 command_runner.go:130] >     },
	I0315 23:45:50.792548  108344 command_runner.go:130] >     {
	I0315 23:45:50.792554  108344 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 23:45:50.792560  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792565  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 23:45:50.792569  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792573  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792583  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 23:45:50.792590  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 23:45:50.792596  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792599  108344 command_runner.go:130] >       "size": "65291810",
	I0315 23:45:50.792606  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792613  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792617  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792621  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792630  108344 command_runner.go:130] >     },
	I0315 23:45:50.792634  108344 command_runner.go:130] >     {
	I0315 23:45:50.792640  108344 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 23:45:50.792645  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792651  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 23:45:50.792657  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792661  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792668  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 23:45:50.792677  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 23:45:50.792683  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792687  108344 command_runner.go:130] >       "size": "1363676",
	I0315 23:45:50.792693  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792697  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792715  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792721  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792724  108344 command_runner.go:130] >     },
	I0315 23:45:50.792730  108344 command_runner.go:130] >     {
	I0315 23:45:50.792736  108344 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 23:45:50.792742  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792749  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 23:45:50.792755  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792759  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792768  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 23:45:50.792782  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 23:45:50.792788  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792793  108344 command_runner.go:130] >       "size": "31470524",
	I0315 23:45:50.792799  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792803  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792809  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792813  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792818  108344 command_runner.go:130] >     },
	I0315 23:45:50.792822  108344 command_runner.go:130] >     {
	I0315 23:45:50.792829  108344 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 23:45:50.792834  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792839  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 23:45:50.792845  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792849  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792856  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 23:45:50.792866  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 23:45:50.792871  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792876  108344 command_runner.go:130] >       "size": "53621675",
	I0315 23:45:50.792881  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792886  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792892  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792896  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792902  108344 command_runner.go:130] >     },
	I0315 23:45:50.792908  108344 command_runner.go:130] >     {
	I0315 23:45:50.792916  108344 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 23:45:50.792920  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792925  108344 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 23:45:50.792930  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792935  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792944  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 23:45:50.792954  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 23:45:50.792961  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792965  108344 command_runner.go:130] >       "size": "295456551",
	I0315 23:45:50.792971  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.792975  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.792984  108344 command_runner.go:130] >       },
	I0315 23:45:50.792990  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792994  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793000  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793004  108344 command_runner.go:130] >     },
	I0315 23:45:50.793007  108344 command_runner.go:130] >     {
	I0315 23:45:50.793013  108344 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 23:45:50.793019  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793024  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 23:45:50.793030  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793034  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793043  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 23:45:50.793053  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 23:45:50.793056  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793063  108344 command_runner.go:130] >       "size": "127226832",
	I0315 23:45:50.793067  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793074  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.793077  108344 command_runner.go:130] >       },
	I0315 23:45:50.793084  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793088  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793094  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793103  108344 command_runner.go:130] >     },
	I0315 23:45:50.793109  108344 command_runner.go:130] >     {
	I0315 23:45:50.793115  108344 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 23:45:50.793121  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793127  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 23:45:50.793132  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793137  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793155  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 23:45:50.793165  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 23:45:50.793171  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793176  108344 command_runner.go:130] >       "size": "123261750",
	I0315 23:45:50.793182  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793187  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.793193  108344 command_runner.go:130] >       },
	I0315 23:45:50.793196  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793202  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793206  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793212  108344 command_runner.go:130] >     },
	I0315 23:45:50.793216  108344 command_runner.go:130] >     {
	I0315 23:45:50.793224  108344 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 23:45:50.793231  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793235  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 23:45:50.793242  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793246  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793255  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 23:45:50.793264  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 23:45:50.793272  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793277  108344 command_runner.go:130] >       "size": "74749335",
	I0315 23:45:50.793281  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.793287  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793291  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793297  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793301  108344 command_runner.go:130] >     },
	I0315 23:45:50.793306  108344 command_runner.go:130] >     {
	I0315 23:45:50.793312  108344 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 23:45:50.793318  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793323  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 23:45:50.793328  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793333  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793342  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 23:45:50.793351  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 23:45:50.793357  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793361  108344 command_runner.go:130] >       "size": "61551410",
	I0315 23:45:50.793366  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793370  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.793376  108344 command_runner.go:130] >       },
	I0315 23:45:50.793380  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793386  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793391  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793396  108344 command_runner.go:130] >     },
	I0315 23:45:50.793400  108344 command_runner.go:130] >     {
	I0315 23:45:50.793409  108344 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 23:45:50.793413  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793418  108344 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 23:45:50.793421  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793428  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793434  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 23:45:50.793442  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 23:45:50.793448  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793452  108344 command_runner.go:130] >       "size": "750414",
	I0315 23:45:50.793458  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793462  108344 command_runner.go:130] >         "value": "65535"
	I0315 23:45:50.793467  108344 command_runner.go:130] >       },
	I0315 23:45:50.793471  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793478  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793482  108344 command_runner.go:130] >       "pinned": true
	I0315 23:45:50.793488  108344 command_runner.go:130] >     }
	I0315 23:45:50.793491  108344 command_runner.go:130] >   ]
	I0315 23:45:50.793497  108344 command_runner.go:130] > }
	I0315 23:45:50.793978  108344 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:45:50.793998  108344 cache_images.go:84] Images are preloaded, skipping loading
	I0315 23:45:50.794006  108344 kubeadm.go:928] updating node { 192.168.39.5 8443 v1.28.4 crio true true} ...
	I0315 23:45:50.794110  108344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-658614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0315 23:45:50.794175  108344 ssh_runner.go:195] Run: crio config
	I0315 23:45:50.836222  108344 command_runner.go:130] ! time="2024-03-15 23:45:50.808508453Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0315 23:45:50.844252  108344 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0315 23:45:50.850007  108344 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0315 23:45:50.850027  108344 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0315 23:45:50.850033  108344 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0315 23:45:50.850037  108344 command_runner.go:130] > #
	I0315 23:45:50.850043  108344 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0315 23:45:50.850049  108344 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0315 23:45:50.850056  108344 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0315 23:45:50.850064  108344 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0315 23:45:50.850073  108344 command_runner.go:130] > # reload'.
	I0315 23:45:50.850079  108344 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0315 23:45:50.850085  108344 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0315 23:45:50.850091  108344 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0315 23:45:50.850105  108344 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0315 23:45:50.850111  108344 command_runner.go:130] > [crio]
	I0315 23:45:50.850120  108344 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0315 23:45:50.850130  108344 command_runner.go:130] > # containers images, in this directory.
	I0315 23:45:50.850137  108344 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0315 23:45:50.850151  108344 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0315 23:45:50.850160  108344 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0315 23:45:50.850175  108344 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0315 23:45:50.850185  108344 command_runner.go:130] > # imagestore = ""
	I0315 23:45:50.850195  108344 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0315 23:45:50.850205  108344 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0315 23:45:50.850210  108344 command_runner.go:130] > storage_driver = "overlay"
	I0315 23:45:50.850219  108344 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0315 23:45:50.850225  108344 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0315 23:45:50.850237  108344 command_runner.go:130] > storage_option = [
	I0315 23:45:50.850244  108344 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0315 23:45:50.850247  108344 command_runner.go:130] > ]
	I0315 23:45:50.850258  108344 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0315 23:45:50.850266  108344 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0315 23:45:50.850273  108344 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0315 23:45:50.850278  108344 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0315 23:45:50.850286  108344 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0315 23:45:50.850291  108344 command_runner.go:130] > # always happen on a node reboot
	I0315 23:45:50.850298  108344 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0315 23:45:50.850321  108344 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0315 23:45:50.850330  108344 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0315 23:45:50.850335  108344 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0315 23:45:50.850340  108344 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0315 23:45:50.850347  108344 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0315 23:45:50.850357  108344 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0315 23:45:50.850363  108344 command_runner.go:130] > # internal_wipe = true
	I0315 23:45:50.850371  108344 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0315 23:45:50.850378  108344 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0315 23:45:50.850383  108344 command_runner.go:130] > # internal_repair = false
	I0315 23:45:50.850390  108344 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0315 23:45:50.850396  108344 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0315 23:45:50.850404  108344 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0315 23:45:50.850413  108344 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0315 23:45:50.850422  108344 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0315 23:45:50.850428  108344 command_runner.go:130] > [crio.api]
	I0315 23:45:50.850434  108344 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0315 23:45:50.850440  108344 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0315 23:45:50.850445  108344 command_runner.go:130] > # IP address on which the stream server will listen.
	I0315 23:45:50.850452  108344 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0315 23:45:50.850459  108344 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0315 23:45:50.850467  108344 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0315 23:45:50.850471  108344 command_runner.go:130] > # stream_port = "0"
	I0315 23:45:50.850478  108344 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0315 23:45:50.850485  108344 command_runner.go:130] > # stream_enable_tls = false
	I0315 23:45:50.850491  108344 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0315 23:45:50.850497  108344 command_runner.go:130] > # stream_idle_timeout = ""
	I0315 23:45:50.850504  108344 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0315 23:45:50.850514  108344 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0315 23:45:50.850520  108344 command_runner.go:130] > # minutes.
	I0315 23:45:50.850524  108344 command_runner.go:130] > # stream_tls_cert = ""
	I0315 23:45:50.850531  108344 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0315 23:45:50.850540  108344 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0315 23:45:50.850546  108344 command_runner.go:130] > # stream_tls_key = ""
	I0315 23:45:50.850552  108344 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0315 23:45:50.850560  108344 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0315 23:45:50.850574  108344 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0315 23:45:50.850580  108344 command_runner.go:130] > # stream_tls_ca = ""
	I0315 23:45:50.850588  108344 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 23:45:50.850594  108344 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0315 23:45:50.850602  108344 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 23:45:50.850609  108344 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0315 23:45:50.850615  108344 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0315 23:45:50.850622  108344 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0315 23:45:50.850629  108344 command_runner.go:130] > [crio.runtime]
	I0315 23:45:50.850634  108344 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0315 23:45:50.850642  108344 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0315 23:45:50.850648  108344 command_runner.go:130] > # "nofile=1024:2048"
	I0315 23:45:50.850654  108344 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0315 23:45:50.850660  108344 command_runner.go:130] > # default_ulimits = [
	I0315 23:45:50.850664  108344 command_runner.go:130] > # ]
	I0315 23:45:50.850671  108344 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0315 23:45:50.850677  108344 command_runner.go:130] > # no_pivot = false
	I0315 23:45:50.850683  108344 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0315 23:45:50.850691  108344 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0315 23:45:50.850698  108344 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0315 23:45:50.850704  108344 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0315 23:45:50.850711  108344 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0315 23:45:50.850717  108344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 23:45:50.850723  108344 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0315 23:45:50.850728  108344 command_runner.go:130] > # Cgroup setting for conmon
	I0315 23:45:50.850736  108344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0315 23:45:50.850741  108344 command_runner.go:130] > conmon_cgroup = "pod"
	I0315 23:45:50.850747  108344 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0315 23:45:50.850754  108344 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0315 23:45:50.850763  108344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 23:45:50.850768  108344 command_runner.go:130] > conmon_env = [
	I0315 23:45:50.850775  108344 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 23:45:50.850781  108344 command_runner.go:130] > ]
	I0315 23:45:50.850786  108344 command_runner.go:130] > # Additional environment variables to set for all the
	I0315 23:45:50.850793  108344 command_runner.go:130] > # containers. These are overridden if set in the
	I0315 23:45:50.850798  108344 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0315 23:45:50.850804  108344 command_runner.go:130] > # default_env = [
	I0315 23:45:50.850808  108344 command_runner.go:130] > # ]
	I0315 23:45:50.850814  108344 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0315 23:45:50.850824  108344 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0315 23:45:50.850829  108344 command_runner.go:130] > # selinux = false
	I0315 23:45:50.850835  108344 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0315 23:45:50.850844  108344 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0315 23:45:50.850851  108344 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0315 23:45:50.850856  108344 command_runner.go:130] > # seccomp_profile = ""
	I0315 23:45:50.850861  108344 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0315 23:45:50.850869  108344 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0315 23:45:50.850877  108344 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0315 23:45:50.850884  108344 command_runner.go:130] > # which might increase security.
	I0315 23:45:50.850889  108344 command_runner.go:130] > # This option is currently deprecated,
	I0315 23:45:50.850896  108344 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0315 23:45:50.850901  108344 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0315 23:45:50.850909  108344 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0315 23:45:50.850917  108344 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0315 23:45:50.850923  108344 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0315 23:45:50.850931  108344 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0315 23:45:50.850938  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.850943  108344 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0315 23:45:50.850950  108344 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0315 23:45:50.850954  108344 command_runner.go:130] > # the cgroup blockio controller.
	I0315 23:45:50.850960  108344 command_runner.go:130] > # blockio_config_file = ""
	I0315 23:45:50.850966  108344 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0315 23:45:50.850973  108344 command_runner.go:130] > # blockio parameters.
	I0315 23:45:50.850977  108344 command_runner.go:130] > # blockio_reload = false
	I0315 23:45:50.850986  108344 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0315 23:45:50.850992  108344 command_runner.go:130] > # irqbalance daemon.
	I0315 23:45:50.850997  108344 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0315 23:45:50.851009  108344 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0315 23:45:50.851015  108344 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0315 23:45:50.851024  108344 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0315 23:45:50.851032  108344 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0315 23:45:50.851039  108344 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0315 23:45:50.851046  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.851050  108344 command_runner.go:130] > # rdt_config_file = ""
	I0315 23:45:50.851057  108344 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0315 23:45:50.851061  108344 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0315 23:45:50.851080  108344 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0315 23:45:50.851086  108344 command_runner.go:130] > # separate_pull_cgroup = ""
	I0315 23:45:50.851092  108344 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0315 23:45:50.851105  108344 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0315 23:45:50.851111  108344 command_runner.go:130] > # will be added.
	I0315 23:45:50.851115  108344 command_runner.go:130] > # default_capabilities = [
	I0315 23:45:50.851120  108344 command_runner.go:130] > # 	"CHOWN",
	I0315 23:45:50.851124  108344 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0315 23:45:50.851130  108344 command_runner.go:130] > # 	"FSETID",
	I0315 23:45:50.851134  108344 command_runner.go:130] > # 	"FOWNER",
	I0315 23:45:50.851140  108344 command_runner.go:130] > # 	"SETGID",
	I0315 23:45:50.851143  108344 command_runner.go:130] > # 	"SETUID",
	I0315 23:45:50.851149  108344 command_runner.go:130] > # 	"SETPCAP",
	I0315 23:45:50.851153  108344 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0315 23:45:50.851160  108344 command_runner.go:130] > # 	"KILL",
	I0315 23:45:50.851163  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851170  108344 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0315 23:45:50.851179  108344 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0315 23:45:50.851185  108344 command_runner.go:130] > # add_inheritable_capabilities = false
	I0315 23:45:50.851192  108344 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0315 23:45:50.851199  108344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 23:45:50.851205  108344 command_runner.go:130] > # default_sysctls = [
	I0315 23:45:50.851208  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851213  108344 command_runner.go:130] > # List of devices on the host that a
	I0315 23:45:50.851220  108344 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0315 23:45:50.851227  108344 command_runner.go:130] > # allowed_devices = [
	I0315 23:45:50.851231  108344 command_runner.go:130] > # 	"/dev/fuse",
	I0315 23:45:50.851238  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851243  108344 command_runner.go:130] > # List of additional devices. specified as
	I0315 23:45:50.851252  108344 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0315 23:45:50.851259  108344 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0315 23:45:50.851265  108344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 23:45:50.851273  108344 command_runner.go:130] > # additional_devices = [
	I0315 23:45:50.851279  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851284  108344 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0315 23:45:50.851290  108344 command_runner.go:130] > # cdi_spec_dirs = [
	I0315 23:45:50.851294  108344 command_runner.go:130] > # 	"/etc/cdi",
	I0315 23:45:50.851297  108344 command_runner.go:130] > # 	"/var/run/cdi",
	I0315 23:45:50.851302  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851308  108344 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0315 23:45:50.851327  108344 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0315 23:45:50.851337  108344 command_runner.go:130] > # Defaults to false.
	I0315 23:45:50.851345  108344 command_runner.go:130] > # device_ownership_from_security_context = false
	I0315 23:45:50.851355  108344 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0315 23:45:50.851365  108344 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0315 23:45:50.851371  108344 command_runner.go:130] > # hooks_dir = [
	I0315 23:45:50.851376  108344 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0315 23:45:50.851382  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851388  108344 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0315 23:45:50.851396  108344 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0315 23:45:50.851403  108344 command_runner.go:130] > # its default mounts from the following two files:
	I0315 23:45:50.851406  108344 command_runner.go:130] > #
	I0315 23:45:50.851414  108344 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0315 23:45:50.851422  108344 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0315 23:45:50.851429  108344 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0315 23:45:50.851435  108344 command_runner.go:130] > #
	I0315 23:45:50.851440  108344 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0315 23:45:50.851449  108344 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0315 23:45:50.851457  108344 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0315 23:45:50.851463  108344 command_runner.go:130] > #      only add mounts it finds in this file.
	I0315 23:45:50.851467  108344 command_runner.go:130] > #
	I0315 23:45:50.851471  108344 command_runner.go:130] > # default_mounts_file = ""
	I0315 23:45:50.851478  108344 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0315 23:45:50.851488  108344 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0315 23:45:50.851494  108344 command_runner.go:130] > pids_limit = 1024
	I0315 23:45:50.851501  108344 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0315 23:45:50.851509  108344 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0315 23:45:50.851518  108344 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0315 23:45:50.851528  108344 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0315 23:45:50.851534  108344 command_runner.go:130] > # log_size_max = -1
	I0315 23:45:50.851541  108344 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0315 23:45:50.851550  108344 command_runner.go:130] > # log_to_journald = false
	I0315 23:45:50.851559  108344 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0315 23:45:50.851566  108344 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0315 23:45:50.851571  108344 command_runner.go:130] > # Path to directory for container attach sockets.
	I0315 23:45:50.851578  108344 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0315 23:45:50.851583  108344 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0315 23:45:50.851589  108344 command_runner.go:130] > # bind_mount_prefix = ""
	I0315 23:45:50.851595  108344 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0315 23:45:50.851600  108344 command_runner.go:130] > # read_only = false
	I0315 23:45:50.851607  108344 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0315 23:45:50.851615  108344 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0315 23:45:50.851621  108344 command_runner.go:130] > # live configuration reload.
	I0315 23:45:50.851625  108344 command_runner.go:130] > # log_level = "info"
	I0315 23:45:50.851633  108344 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0315 23:45:50.851638  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.851644  108344 command_runner.go:130] > # log_filter = ""
	I0315 23:45:50.851651  108344 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0315 23:45:50.851660  108344 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0315 23:45:50.851666  108344 command_runner.go:130] > # separated by comma.
	I0315 23:45:50.851673  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851679  108344 command_runner.go:130] > # uid_mappings = ""
	I0315 23:45:50.851685  108344 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0315 23:45:50.851694  108344 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0315 23:45:50.851701  108344 command_runner.go:130] > # separated by comma.
	I0315 23:45:50.851708  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851714  108344 command_runner.go:130] > # gid_mappings = ""
	I0315 23:45:50.851720  108344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0315 23:45:50.851729  108344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 23:45:50.851736  108344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 23:45:50.851745  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851751  108344 command_runner.go:130] > # minimum_mappable_uid = -1
	I0315 23:45:50.851757  108344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0315 23:45:50.851765  108344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 23:45:50.851773  108344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 23:45:50.851781  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851787  108344 command_runner.go:130] > # minimum_mappable_gid = -1
	I0315 23:45:50.851793  108344 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0315 23:45:50.851803  108344 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0315 23:45:50.851811  108344 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0315 23:45:50.851818  108344 command_runner.go:130] > # ctr_stop_timeout = 30
	I0315 23:45:50.851823  108344 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0315 23:45:50.851831  108344 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0315 23:45:50.851838  108344 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0315 23:45:50.851843  108344 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0315 23:45:50.851849  108344 command_runner.go:130] > drop_infra_ctr = false
	I0315 23:45:50.851855  108344 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0315 23:45:50.851863  108344 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0315 23:45:50.851871  108344 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0315 23:45:50.851877  108344 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0315 23:45:50.851888  108344 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0315 23:45:50.851897  108344 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0315 23:45:50.851905  108344 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0315 23:45:50.851912  108344 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0315 23:45:50.851916  108344 command_runner.go:130] > # shared_cpuset = ""
	I0315 23:45:50.851924  108344 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0315 23:45:50.851932  108344 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0315 23:45:50.851938  108344 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0315 23:45:50.851945  108344 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0315 23:45:50.851951  108344 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0315 23:45:50.851957  108344 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0315 23:45:50.851965  108344 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0315 23:45:50.851972  108344 command_runner.go:130] > # enable_criu_support = false
	I0315 23:45:50.851978  108344 command_runner.go:130] > # Enable/disable the generation of the container,
	I0315 23:45:50.851986  108344 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0315 23:45:50.851991  108344 command_runner.go:130] > # enable_pod_events = false
	I0315 23:45:50.852000  108344 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0315 23:45:50.852013  108344 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0315 23:45:50.852017  108344 command_runner.go:130] > # default_runtime = "runc"
	I0315 23:45:50.852024  108344 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0315 23:45:50.852032  108344 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0315 23:45:50.852042  108344 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0315 23:45:50.852051  108344 command_runner.go:130] > # creation as a file is not desired either.
	I0315 23:45:50.852061  108344 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0315 23:45:50.852069  108344 command_runner.go:130] > # the hostname is being managed dynamically.
	I0315 23:45:50.852073  108344 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0315 23:45:50.852079  108344 command_runner.go:130] > # ]
	I0315 23:45:50.852085  108344 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0315 23:45:50.852093  108344 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0315 23:45:50.852105  108344 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0315 23:45:50.852112  108344 command_runner.go:130] > # Each entry in the table should follow the format:
	I0315 23:45:50.852117  108344 command_runner.go:130] > #
	I0315 23:45:50.852122  108344 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0315 23:45:50.852129  108344 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0315 23:45:50.852133  108344 command_runner.go:130] > # runtime_type = "oci"
	I0315 23:45:50.852157  108344 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0315 23:45:50.852165  108344 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0315 23:45:50.852169  108344 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0315 23:45:50.852176  108344 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0315 23:45:50.852180  108344 command_runner.go:130] > # monitor_env = []
	I0315 23:45:50.852186  108344 command_runner.go:130] > # privileged_without_host_devices = false
	I0315 23:45:50.852190  108344 command_runner.go:130] > # allowed_annotations = []
	I0315 23:45:50.852198  108344 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0315 23:45:50.852201  108344 command_runner.go:130] > # Where:
	I0315 23:45:50.852208  108344 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0315 23:45:50.852215  108344 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0315 23:45:50.852223  108344 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0315 23:45:50.852231  108344 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0315 23:45:50.852237  108344 command_runner.go:130] > #   in $PATH.
	I0315 23:45:50.852243  108344 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0315 23:45:50.852251  108344 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0315 23:45:50.852260  108344 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0315 23:45:50.852266  108344 command_runner.go:130] > #   state.
	I0315 23:45:50.852273  108344 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0315 23:45:50.852280  108344 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0315 23:45:50.852289  108344 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0315 23:45:50.852294  108344 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0315 23:45:50.852302  108344 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0315 23:45:50.852311  108344 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0315 23:45:50.852318  108344 command_runner.go:130] > #   The currently recognized values are:
	I0315 23:45:50.852326  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0315 23:45:50.852334  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0315 23:45:50.852342  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0315 23:45:50.852349  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0315 23:45:50.852357  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0315 23:45:50.852365  108344 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0315 23:45:50.852374  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0315 23:45:50.852382  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0315 23:45:50.852388  108344 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0315 23:45:50.852396  108344 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0315 23:45:50.852401  108344 command_runner.go:130] > #   deprecated option "conmon".
	I0315 23:45:50.852407  108344 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0315 23:45:50.852414  108344 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0315 23:45:50.852420  108344 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0315 23:45:50.852428  108344 command_runner.go:130] > #   should be moved to the container's cgroup
	I0315 23:45:50.852435  108344 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0315 23:45:50.852441  108344 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0315 23:45:50.852448  108344 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0315 23:45:50.852455  108344 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0315 23:45:50.852458  108344 command_runner.go:130] > #
	I0315 23:45:50.852463  108344 command_runner.go:130] > # Using the seccomp notifier feature:
	I0315 23:45:50.852469  108344 command_runner.go:130] > #
	I0315 23:45:50.852474  108344 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0315 23:45:50.852483  108344 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0315 23:45:50.852486  108344 command_runner.go:130] > #
	I0315 23:45:50.852492  108344 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0315 23:45:50.852501  108344 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0315 23:45:50.852507  108344 command_runner.go:130] > #
	I0315 23:45:50.852513  108344 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0315 23:45:50.852519  108344 command_runner.go:130] > # feature.
	I0315 23:45:50.852522  108344 command_runner.go:130] > #
	I0315 23:45:50.852530  108344 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0315 23:45:50.852538  108344 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0315 23:45:50.852544  108344 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0315 23:45:50.852552  108344 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0315 23:45:50.852562  108344 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0315 23:45:50.852568  108344 command_runner.go:130] > #
	I0315 23:45:50.852573  108344 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0315 23:45:50.852582  108344 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0315 23:45:50.852587  108344 command_runner.go:130] > #
	I0315 23:45:50.852592  108344 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0315 23:45:50.852600  108344 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0315 23:45:50.852603  108344 command_runner.go:130] > #
	I0315 23:45:50.852612  108344 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0315 23:45:50.852620  108344 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0315 23:45:50.852626  108344 command_runner.go:130] > # limitation.
	I0315 23:45:50.852630  108344 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0315 23:45:50.852637  108344 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0315 23:45:50.852640  108344 command_runner.go:130] > runtime_type = "oci"
	I0315 23:45:50.852646  108344 command_runner.go:130] > runtime_root = "/run/runc"
	I0315 23:45:50.852650  108344 command_runner.go:130] > runtime_config_path = ""
	I0315 23:45:50.852657  108344 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0315 23:45:50.852661  108344 command_runner.go:130] > monitor_cgroup = "pod"
	I0315 23:45:50.852664  108344 command_runner.go:130] > monitor_exec_cgroup = ""
	I0315 23:45:50.852669  108344 command_runner.go:130] > monitor_env = [
	I0315 23:45:50.852675  108344 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 23:45:50.852680  108344 command_runner.go:130] > ]
	I0315 23:45:50.852684  108344 command_runner.go:130] > privileged_without_host_devices = false
	I0315 23:45:50.852692  108344 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0315 23:45:50.852700  108344 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0315 23:45:50.852706  108344 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0315 23:45:50.852715  108344 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0315 23:45:50.852725  108344 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0315 23:45:50.852733  108344 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0315 23:45:50.852743  108344 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0315 23:45:50.852752  108344 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0315 23:45:50.852757  108344 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0315 23:45:50.852767  108344 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0315 23:45:50.852773  108344 command_runner.go:130] > # Example:
	I0315 23:45:50.852777  108344 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0315 23:45:50.852783  108344 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0315 23:45:50.852788  108344 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0315 23:45:50.852793  108344 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0315 23:45:50.852798  108344 command_runner.go:130] > # cpuset = 0
	I0315 23:45:50.852802  108344 command_runner.go:130] > # cpushares = "0-1"
	I0315 23:45:50.852805  108344 command_runner.go:130] > # Where:
	I0315 23:45:50.852809  108344 command_runner.go:130] > # The workload name is workload-type.
	I0315 23:45:50.852815  108344 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0315 23:45:50.852820  108344 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0315 23:45:50.852825  108344 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0315 23:45:50.852832  108344 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0315 23:45:50.852838  108344 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0315 23:45:50.852843  108344 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0315 23:45:50.852848  108344 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0315 23:45:50.852852  108344 command_runner.go:130] > # Default value is set to true
	I0315 23:45:50.852856  108344 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0315 23:45:50.852861  108344 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0315 23:45:50.852866  108344 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0315 23:45:50.852870  108344 command_runner.go:130] > # Default value is set to 'false'
	I0315 23:45:50.852874  108344 command_runner.go:130] > # disable_hostport_mapping = false
	I0315 23:45:50.852880  108344 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0315 23:45:50.852883  108344 command_runner.go:130] > #
	I0315 23:45:50.852887  108344 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0315 23:45:50.852893  108344 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0315 23:45:50.852899  108344 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0315 23:45:50.852904  108344 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0315 23:45:50.852909  108344 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0315 23:45:50.852912  108344 command_runner.go:130] > [crio.image]
	I0315 23:45:50.852918  108344 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0315 23:45:50.852922  108344 command_runner.go:130] > # default_transport = "docker://"
	I0315 23:45:50.852928  108344 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0315 23:45:50.852934  108344 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0315 23:45:50.852938  108344 command_runner.go:130] > # global_auth_file = ""
	I0315 23:45:50.852942  108344 command_runner.go:130] > # The image used to instantiate infra containers.
	I0315 23:45:50.852947  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.852951  108344 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0315 23:45:50.852957  108344 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0315 23:45:50.852963  108344 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0315 23:45:50.852967  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.852973  108344 command_runner.go:130] > # pause_image_auth_file = ""
	I0315 23:45:50.852981  108344 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0315 23:45:50.852987  108344 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0315 23:45:50.852995  108344 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0315 23:45:50.853001  108344 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0315 23:45:50.853007  108344 command_runner.go:130] > # pause_command = "/pause"
	I0315 23:45:50.853013  108344 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0315 23:45:50.853021  108344 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0315 23:45:50.853028  108344 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0315 23:45:50.853034  108344 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0315 23:45:50.853043  108344 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0315 23:45:50.853051  108344 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0315 23:45:50.853057  108344 command_runner.go:130] > # pinned_images = [
	I0315 23:45:50.853060  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853068  108344 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0315 23:45:50.853077  108344 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0315 23:45:50.853083  108344 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0315 23:45:50.853091  108344 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0315 23:45:50.853101  108344 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0315 23:45:50.853107  108344 command_runner.go:130] > # signature_policy = ""
	I0315 23:45:50.853113  108344 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0315 23:45:50.853121  108344 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0315 23:45:50.853129  108344 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0315 23:45:50.853136  108344 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0315 23:45:50.853144  108344 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0315 23:45:50.853150  108344 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0315 23:45:50.853158  108344 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0315 23:45:50.853165  108344 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0315 23:45:50.853171  108344 command_runner.go:130] > # changing them here.
	I0315 23:45:50.853175  108344 command_runner.go:130] > # insecure_registries = [
	I0315 23:45:50.853181  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853187  108344 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0315 23:45:50.853194  108344 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0315 23:45:50.853198  108344 command_runner.go:130] > # image_volumes = "mkdir"
	I0315 23:45:50.853206  108344 command_runner.go:130] > # Temporary directory to use for storing big files
	I0315 23:45:50.853212  108344 command_runner.go:130] > # big_files_temporary_dir = ""
	I0315 23:45:50.853218  108344 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0315 23:45:50.853226  108344 command_runner.go:130] > # CNI plugins.
	I0315 23:45:50.853232  108344 command_runner.go:130] > [crio.network]
	I0315 23:45:50.853238  108344 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0315 23:45:50.853246  108344 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0315 23:45:50.853250  108344 command_runner.go:130] > # cni_default_network = ""
	I0315 23:45:50.853258  108344 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0315 23:45:50.853265  108344 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0315 23:45:50.853270  108344 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0315 23:45:50.853276  108344 command_runner.go:130] > # plugin_dirs = [
	I0315 23:45:50.853280  108344 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0315 23:45:50.853285  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853291  108344 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0315 23:45:50.853297  108344 command_runner.go:130] > [crio.metrics]
	I0315 23:45:50.853302  108344 command_runner.go:130] > # Globally enable or disable metrics support.
	I0315 23:45:50.853308  108344 command_runner.go:130] > enable_metrics = true
	I0315 23:45:50.853313  108344 command_runner.go:130] > # Specify enabled metrics collectors.
	I0315 23:45:50.853321  108344 command_runner.go:130] > # Per default all metrics are enabled.
	I0315 23:45:50.853329  108344 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0315 23:45:50.853335  108344 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0315 23:45:50.853344  108344 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0315 23:45:50.853348  108344 command_runner.go:130] > # metrics_collectors = [
	I0315 23:45:50.853352  108344 command_runner.go:130] > # 	"operations",
	I0315 23:45:50.853359  108344 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0315 23:45:50.853364  108344 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0315 23:45:50.853371  108344 command_runner.go:130] > # 	"operations_errors",
	I0315 23:45:50.853375  108344 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0315 23:45:50.853381  108344 command_runner.go:130] > # 	"image_pulls_by_name",
	I0315 23:45:50.853385  108344 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0315 23:45:50.853392  108344 command_runner.go:130] > # 	"image_pulls_failures",
	I0315 23:45:50.853396  108344 command_runner.go:130] > # 	"image_pulls_successes",
	I0315 23:45:50.853402  108344 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0315 23:45:50.853407  108344 command_runner.go:130] > # 	"image_layer_reuse",
	I0315 23:45:50.853413  108344 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0315 23:45:50.853417  108344 command_runner.go:130] > # 	"containers_oom_total",
	I0315 23:45:50.853421  108344 command_runner.go:130] > # 	"containers_oom",
	I0315 23:45:50.853427  108344 command_runner.go:130] > # 	"processes_defunct",
	I0315 23:45:50.853431  108344 command_runner.go:130] > # 	"operations_total",
	I0315 23:45:50.853437  108344 command_runner.go:130] > # 	"operations_latency_seconds",
	I0315 23:45:50.853442  108344 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0315 23:45:50.853448  108344 command_runner.go:130] > # 	"operations_errors_total",
	I0315 23:45:50.853452  108344 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0315 23:45:50.853458  108344 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0315 23:45:50.853462  108344 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0315 23:45:50.853468  108344 command_runner.go:130] > # 	"image_pulls_success_total",
	I0315 23:45:50.853472  108344 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0315 23:45:50.853478  108344 command_runner.go:130] > # 	"containers_oom_count_total",
	I0315 23:45:50.853486  108344 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0315 23:45:50.853492  108344 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0315 23:45:50.853496  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853500  108344 command_runner.go:130] > # The port on which the metrics server will listen.
	I0315 23:45:50.853506  108344 command_runner.go:130] > # metrics_port = 9090
	I0315 23:45:50.853512  108344 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0315 23:45:50.853517  108344 command_runner.go:130] > # metrics_socket = ""
	I0315 23:45:50.853522  108344 command_runner.go:130] > # The certificate for the secure metrics server.
	I0315 23:45:50.853530  108344 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0315 23:45:50.853540  108344 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0315 23:45:50.853546  108344 command_runner.go:130] > # certificate on any modification event.
	I0315 23:45:50.853550  108344 command_runner.go:130] > # metrics_cert = ""
	I0315 23:45:50.853561  108344 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0315 23:45:50.853568  108344 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0315 23:45:50.853573  108344 command_runner.go:130] > # metrics_key = ""
	I0315 23:45:50.853580  108344 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0315 23:45:50.853586  108344 command_runner.go:130] > [crio.tracing]
	I0315 23:45:50.853592  108344 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0315 23:45:50.853598  108344 command_runner.go:130] > # enable_tracing = false
	I0315 23:45:50.853603  108344 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0315 23:45:50.853609  108344 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0315 23:45:50.853616  108344 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0315 23:45:50.853623  108344 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0315 23:45:50.853641  108344 command_runner.go:130] > # CRI-O NRI configuration.
	I0315 23:45:50.853645  108344 command_runner.go:130] > [crio.nri]
	I0315 23:45:50.853650  108344 command_runner.go:130] > # Globally enable or disable NRI.
	I0315 23:45:50.853654  108344 command_runner.go:130] > # enable_nri = false
	I0315 23:45:50.853660  108344 command_runner.go:130] > # NRI socket to listen on.
	I0315 23:45:50.853665  108344 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0315 23:45:50.853672  108344 command_runner.go:130] > # NRI plugin directory to use.
	I0315 23:45:50.853676  108344 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0315 23:45:50.853683  108344 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0315 23:45:50.853689  108344 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0315 23:45:50.853696  108344 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0315 23:45:50.853703  108344 command_runner.go:130] > # nri_disable_connections = false
	I0315 23:45:50.853708  108344 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0315 23:45:50.853716  108344 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0315 23:45:50.853721  108344 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0315 23:45:50.853727  108344 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0315 23:45:50.853733  108344 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0315 23:45:50.853739  108344 command_runner.go:130] > [crio.stats]
	I0315 23:45:50.853744  108344 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0315 23:45:50.853754  108344 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0315 23:45:50.853760  108344 command_runner.go:130] > # stats_collection_period = 0
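The block above is the full CRI-O configuration the provisioner echoes back before it builds the kubeadm config. As a quick way to sanity-check the handful of values minikube sets explicitly (pids_limit, drop_infra_ctr, pinns_path, enable_metrics), a small decoder sketch could look like the following; the /etc/crio/crio.conf path, the struct layout, and the use of github.com/BurntSushi/toml are assumptions for illustration, not part of minikube.

```go
// Sketch: decode a few of the CRI-O settings echoed above. The struct models only
// the keys we care about; everything else in the file is ignored by the decoder.
// Path, field names, and the TOML library are illustrative assumptions.
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type crioConfig struct {
	Crio struct {
		Runtime struct {
			PidsLimit    int64  `toml:"pids_limit"`
			DropInfraCtr bool   `toml:"drop_infra_ctr"`
			PinnsPath    string `toml:"pinns_path"`
		} `toml:"runtime"`
		Metrics struct {
			EnableMetrics bool `toml:"enable_metrics"`
		} `toml:"metrics"`
	} `toml:"crio"`
}

func main() {
	var cfg crioConfig
	if _, err := toml.DecodeFile("/etc/crio/crio.conf", &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("pids_limit=%d drop_infra_ctr=%v pinns_path=%q enable_metrics=%v\n",
		cfg.Crio.Runtime.PidsLimit, cfg.Crio.Runtime.DropInfraCtr,
		cfg.Crio.Runtime.PinnsPath, cfg.Crio.Metrics.EnableMetrics)
}
```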
	I0315 23:45:50.853908  108344 cni.go:84] Creating CNI manager for ""
	I0315 23:45:50.853923  108344 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 23:45:50.853935  108344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 23:45:50.853961  108344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-658614 NodeName:multinode-658614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 23:45:50.854130  108344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-658614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 23:45:50.854196  108344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:45:50.865054  108344 command_runner.go:130] > kubeadm
	I0315 23:45:50.865081  108344 command_runner.go:130] > kubectl
	I0315 23:45:50.865085  108344 command_runner.go:130] > kubelet
	I0315 23:45:50.865111  108344 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 23:45:50.865160  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 23:45:50.875682  108344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0315 23:45:50.894627  108344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:45:50.912021  108344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
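The rendered kubeadm/kubelet/kube-proxy configuration shown above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A hypothetical sketch of rendering the InitConfiguration fragment from the per-node values (advertise address, node name, CRI socket) with text/template is shown below; the template text and field names are illustrative only, not minikube's actual template.

```go
// Sketch: render a fragment of the kubeadm InitConfiguration shown above from
// per-node values. Template text and struct fields are hypothetical.
package main

import (
	"os"
	"text/template"
)

type nodeParams struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	CRISocket        string
}

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.AdvertiseAddress}}
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	_ = t.Execute(os.Stdout, nodeParams{
		AdvertiseAddress: "192.168.39.5",
		BindPort:         8443,
		NodeName:         "multinode-658614",
		CRISocket:        "unix:///var/run/crio/crio.sock",
	})
}
```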
	I0315 23:45:50.928642  108344 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I0315 23:45:50.932319  108344 command_runner.go:130] > 192.168.39.5	control-plane.minikube.internal
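Before reloading systemd and starting the kubelet, the provisioner greps /etc/hosts for the control-plane.minikube.internal mapping, as seen above. A minimal sketch of that check, appending the entry only when it is missing, might look like this; the helper name and error handling are assumptions.

```go
// Sketch: ensure /etc/hosts maps the node IP to control-plane.minikube.internal,
// mirroring the grep step above. Paths and error handling are illustrative.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func ensureHostsEntry(path, ip, host string) error {
	f, err := os.OpenFile(path, os.O_RDWR|os.O_CREATE, 0644)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if strings.Contains(sc.Text(), ip) && strings.Contains(sc.Text(), host) {
			return nil // entry already present
		}
	}
	if err := sc.Err(); err != nil {
		return err
	}
	// Scanner left the offset at EOF, so this appends the missing entry.
	_, err = fmt.Fprintf(f, "%s\t%s\n", ip, host)
	return err
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.5", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```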
	I0315 23:45:50.932443  108344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:45:51.090396  108344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:45:51.105649  108344 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614 for IP: 192.168.39.5
	I0315 23:45:51.105676  108344 certs.go:194] generating shared ca certs ...
	I0315 23:45:51.105694  108344 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:45:51.105852  108344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:45:51.105893  108344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:45:51.105903  108344 certs.go:256] generating profile certs ...
	I0315 23:45:51.105982  108344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/client.key
	I0315 23:45:51.106049  108344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.key.cf0e8e33
	I0315 23:45:51.106123  108344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.key
	I0315 23:45:51.106139  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:45:51.106157  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:45:51.106168  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:45:51.106179  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:45:51.106188  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:45:51.106200  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:45:51.106213  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:45:51.106223  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:45:51.106271  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:45:51.106305  108344 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:45:51.106315  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:45:51.106336  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:45:51.106358  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:45:51.106381  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:45:51.106419  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:45:51.106442  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.106455  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.106467  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.107010  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:45:51.134098  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:45:51.159098  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:45:51.184170  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:45:51.210428  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 23:45:51.235143  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:45:51.260488  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:45:51.285937  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:45:51.310353  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:45:51.334720  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:45:51.359551  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:45:51.384053  108344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 23:45:51.401067  108344 ssh_runner.go:195] Run: openssl version
	I0315 23:45:51.407868  108344 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0315 23:45:51.408005  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:45:51.422915  108344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.427778  108344 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.427825  108344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.427872  108344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.433651  108344 command_runner.go:130] > 3ec20f2e
	I0315 23:45:51.433720  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:45:51.443714  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:45:51.454991  108344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.459573  108344 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.459604  108344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.459662  108344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.465298  108344 command_runner.go:130] > b5213941
	I0315 23:45:51.465586  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:45:51.475612  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:45:51.486980  108344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.492113  108344 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.492134  108344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.492182  108344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.497783  108344 command_runner.go:130] > 51391683
	I0315 23:45:51.498047  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
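The three blocks above copy each CA bundle into /usr/share/ca-certificates, compute its OpenSSL subject hash, and link /etc/ssl/certs/<hash>.0 back to it. A compressed sketch of the same sequence, shelling out to the same openssl invocation the log shows, could be:

```go
// Sketch: compute the OpenSSL subject hash for a CA file and create the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the openssl and `ln -fs` steps above.
// Running this for real requires root; the example path is taken from the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```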
	I0315 23:45:51.508080  108344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:45:51.512802  108344 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:45:51.512832  108344 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0315 23:45:51.512841  108344 command_runner.go:130] > Device: 253,1	Inode: 3150397     Links: 1
	I0315 23:45:51.512851  108344 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 23:45:51.512867  108344 command_runner.go:130] > Access: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512877  108344 command_runner.go:130] > Modify: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512886  108344 command_runner.go:130] > Change: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512895  108344 command_runner.go:130] >  Birth: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512952  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 23:45:51.518594  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.518839  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 23:45:51.524497  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.524709  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 23:45:51.530492  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.530552  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 23:45:51.536519  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.536580  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 23:45:51.542454  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.542498  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 23:45:51.548338  108344 command_runner.go:130] > Certificate will not expire
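Each of the -checkend 86400 runs asks whether a certificate expires within the next 24 hours. The equivalent check done natively with Go's crypto/x509, rather than shelling out to openssl, can be sketched as follows; the helper name and the example path are illustrative.

```go
// Sketch: report whether a PEM certificate expires within the next 24 hours,
// equivalent to `openssl x509 -noout -checkend 86400 -in <file>`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring "soon" means NotAfter falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}
```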
	I0315 23:45:51.548390  108344 kubeadm.go:391] StartCluster: {Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:45:51.548525  108344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 23:45:51.548603  108344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 23:45:51.585941  108344 command_runner.go:130] > c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a
	I0315 23:45:51.585965  108344 command_runner.go:130] > 06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395
	I0315 23:45:51.585971  108344 command_runner.go:130] > 909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b
	I0315 23:45:51.585980  108344 command_runner.go:130] > c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c
	I0315 23:45:51.586118  108344 command_runner.go:130] > c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1
	I0315 23:45:51.586144  108344 command_runner.go:130] > 4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e
	I0315 23:45:51.586200  108344 command_runner.go:130] > 632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870
	I0315 23:45:51.586280  108344 command_runner.go:130] > 83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02
	I0315 23:45:51.588116  108344 cri.go:89] found id: "c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a"
	I0315 23:45:51.588134  108344 cri.go:89] found id: "06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395"
	I0315 23:45:51.588139  108344 cri.go:89] found id: "909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b"
	I0315 23:45:51.588142  108344 cri.go:89] found id: "c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c"
	I0315 23:45:51.588145  108344 cri.go:89] found id: "c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1"
	I0315 23:45:51.588148  108344 cri.go:89] found id: "4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e"
	I0315 23:45:51.588151  108344 cri.go:89] found id: "632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870"
	I0315 23:45:51.588154  108344 cri.go:89] found id: "83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02"
	I0315 23:45:51.588156  108344 cri.go:89] found id: ""
	I0315 23:45:51.588198  108344 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.382149525Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546437382077889,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7b47e895-4db9-4862-9881-86876cf842b9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.382577781Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c4b3e7e6-7722-43ed-aa3c-adf2d58fa114 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.382668649Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c4b3e7e6-7722-43ed-aa3c-adf2d58fa114 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.383055982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c4b3e7e6-7722-43ed-aa3c-adf2d58fa114 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.428621140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85f1be10-4943-4dbb-808e-989647a782d5 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.428744483Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85f1be10-4943-4dbb-808e-989647a782d5 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.430321506Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=033ecace-ddba-4d42-88d9-bdf7b3219aea name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.430753962Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546437430733891,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=033ecace-ddba-4d42-88d9-bdf7b3219aea name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.431391133Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59288dc8-3058-48e5-927d-5ef8340db159 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.431449670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59288dc8-3058-48e5-927d-5ef8340db159 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.431787661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59288dc8-3058-48e5-927d-5ef8340db159 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.474733267Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b3938533-5aa0-4bf9-b8b2-ad7ddc697b58 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.474828027Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b3938533-5aa0-4bf9-b8b2-ad7ddc697b58 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.476077350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=97c3dce5-10cf-40d7-8339-e69dfc5e94ab name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.476564478Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546437476542568,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=97c3dce5-10cf-40d7-8339-e69dfc5e94ab name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.477198620Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3ea85a29-3ad0-42b3-94b9-5a5f549aac21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.477258657Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3ea85a29-3ad0-42b3-94b9-5a5f549aac21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.478185156Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3ea85a29-3ad0-42b3-94b9-5a5f549aac21 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.527372696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=feb7ba14-d6f8-45be-9504-805becb4fab2 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.527444925Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=feb7ba14-d6f8-45be-9504-805becb4fab2 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.528898993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=035b2543-237d-49a5-91e7-efec067baadc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.529396031Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546437529373377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=035b2543-237d-49a5-91e7-efec067baadc name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.530269129Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e414aa9a-bb4c-4406-8b4f-bd9e67cc706d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.530323521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e414aa9a-bb4c-4406-8b4f-bd9e67cc706d name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:47:17 multinode-658614 crio[2852]: time="2024-03-15 23:47:17.530660340Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e414aa9a-bb4c-4406-8b4f-bd9e67cc706d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	dc42d8471a949       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      46 seconds ago       Running             busybox                   1                   d09919244e9c2       busybox-5b5d89c9d6-92n6k
	cc0741cddf298       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   aaa54b04e6c90       kindnet-fbp4p
	0f0e444a07850       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   f80eb860efd40       coredns-5dd5756b68-svv8j
	d2b0d61df2eb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   4d751dda4b627       storage-provisioner
	90fceaff63bd1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   507f5f4ddce61       kube-proxy-htvcb
	0a5c1020bc422       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   3b953de6379c2       kube-controller-manager-multinode-658614
	a6bada5ba1ce5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   acfb76c388636       kube-scheduler-multinode-658614
	5db0e1e79f1b8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   010abb516e30f       kube-apiserver-multinode-658614
	e0260f4b557b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   98f9ff3db2c59       etcd-multinode-658614
	2e814843722b3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   8be786cb8d36a       busybox-5b5d89c9d6-92n6k
	c4de486f1575d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   cd8e591d04d83       coredns-5dd5756b68-svv8j
	06a5e9d3986b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   257fb4484d03d       storage-provisioner
	909ae95e34667       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   02c02adee17ad       kindnet-fbp4p
	c47cb7f221d82       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   af098ff74177e       kube-proxy-htvcb
	c906413d2f1ff       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   e1f0cefc865de       kube-scheduler-multinode-658614
	4241c39a188f5       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   d222d247f3f4b       kube-apiserver-multinode-658614
	632296766ea82       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   ce796ddbaee08       kube-controller-manager-multinode-658614
	83287ddb0e44f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   63c7df9fa65ef       etcd-multinode-658614
	
	
	==> coredns [0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51548 - 15259 "HINFO IN 6031034294552822681.752798960729125411. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.12685263s
	
	
	==> coredns [c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a] <==
	[INFO] 10.244.1.2:53078 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00163102s
	[INFO] 10.244.1.2:60668 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082566s
	[INFO] 10.244.1.2:56629 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111406s
	[INFO] 10.244.1.2:60947 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001004102s
	[INFO] 10.244.1.2:50606 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117165s
	[INFO] 10.244.1.2:60733 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110786s
	[INFO] 10.244.1.2:48408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068872s
	[INFO] 10.244.0.3:54069 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112242s
	[INFO] 10.244.0.3:43363 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108029s
	[INFO] 10.244.0.3:55139 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086624s
	[INFO] 10.244.0.3:34718 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001326s
	[INFO] 10.244.1.2:39766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156811s
	[INFO] 10.244.1.2:53188 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110977s
	[INFO] 10.244.1.2:40890 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117994s
	[INFO] 10.244.1.2:56357 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106009s
	[INFO] 10.244.0.3:51288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116825s
	[INFO] 10.244.0.3:56174 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000178603s
	[INFO] 10.244.0.3:53162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135355s
	[INFO] 10.244.0.3:48700 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079412s
	[INFO] 10.244.1.2:45984 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154255s
	[INFO] 10.244.1.2:54555 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201559s
	[INFO] 10.244.1.2:41306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093647s
	[INFO] 10.244.1.2:34839 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-658614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-658614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=multinode-658614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T23_39_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-658614
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:47:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    multinode-658614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 368493f35fdf43e6a1515de1070ad8f9
	  System UUID:                368493f3-5fdf-43e6-a151-5de1070ad8f9
	  Boot ID:                    5e2adf93-f7fb-413e-8e50-0831904af602
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-92n6k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 coredns-5dd5756b68-svv8j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m25s
	  kube-system                 etcd-multinode-658614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m38s
	  kube-system                 kindnet-fbp4p                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m25s
	  kube-system                 kube-apiserver-multinode-658614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-controller-manager-multinode-658614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 kube-proxy-htvcb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-scheduler-multinode-658614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m38s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m24s              kube-proxy       
	  Normal  Starting                 79s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m38s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m38s              kubelet          Node multinode-658614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m38s              kubelet          Node multinode-658614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m38s              kubelet          Node multinode-658614 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m38s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m25s              node-controller  Node multinode-658614 event: Registered Node multinode-658614 in Controller
	  Normal  NodeReady                7m20s              kubelet          Node multinode-658614 status is now: NodeReady
	  Normal  Starting                 84s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  84s (x8 over 84s)  kubelet          Node multinode-658614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    84s (x8 over 84s)  kubelet          Node multinode-658614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     84s (x7 over 84s)  kubelet          Node multinode-658614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  84s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           68s                node-controller  Node multinode-658614 event: Registered Node multinode-658614 in Controller
	
	
	Name:               multinode-658614-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-658614-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=multinode-658614
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_46_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:46:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-658614-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:47:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:46:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:46:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:46:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:46:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-658614-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 462735656ec24d819e5b2ae09c8dcc97
	  System UUID:                46273565-6ec2-4d81-9e5b-2ae09c8dcc97
	  Boot ID:                    ca6582ea-21cb-41f7-9eb4-1b75bd144789
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ljljd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kindnet-f9785               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m46s
	  kube-system                 kube-proxy-ph8fc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m42s                  kube-proxy       
	  Normal  Starting                 35s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m46s (x5 over 6m47s)  kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s (x5 over 6m47s)  kubelet          Node multinode-658614-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m46s (x5 over 6m47s)  kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m38s                  kubelet          Node multinode-658614-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  38s (x5 over 39s)      kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x5 over 39s)      kubelet          Node multinode-658614-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x5 over 39s)      kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s                    node-controller  Node multinode-658614-m02 event: Registered Node multinode-658614-m02 in Controller
	  Normal  NodeReady                31s                    kubelet          Node multinode-658614-m02 status is now: NodeReady
	
	
	Name:               multinode-658614-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-658614-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=multinode-658614
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_47_08_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:47:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-658614-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:47:14 +0000   Fri, 15 Mar 2024 23:47:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:47:14 +0000   Fri, 15 Mar 2024 23:47:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:47:14 +0000   Fri, 15 Mar 2024 23:47:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:47:14 +0000   Fri, 15 Mar 2024 23:47:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.156
	  Hostname:    multinode-658614-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 247885cf073743f3b802daa385eb0acf
	  System UUID:                247885cf-0737-43f3-b802-daa385eb0acf
	  Boot ID:                    8784c9bb-b665-4317-8550-8b7c90a34847
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-w9gns       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m
	  kube-system                 kube-proxy-lfstz    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  Starting                 5m15s                  kube-proxy  
	  Normal  Starting                 5m55s                  kube-proxy  
	  Normal  Starting                 7s                     kube-proxy  
	  Normal  NodeHasNoDiskPressure    6m (x5 over 6m1s)      kubelet     Node multinode-658614-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m (x5 over 6m1s)      kubelet     Node multinode-658614-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m (x5 over 6m1s)      kubelet     Node multinode-658614-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m51s                  kubelet     Node multinode-658614-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m18s (x5 over 5m19s)  kubelet     Node multinode-658614-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m18s (x5 over 5m19s)  kubelet     Node multinode-658614-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  5m18s (x5 over 5m19s)  kubelet     Node multinode-658614-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m11s                  kubelet     Node multinode-658614-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  10s (x5 over 11s)      kubelet     Node multinode-658614-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10s (x5 over 11s)      kubelet     Node multinode-658614-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10s (x5 over 11s)      kubelet     Node multinode-658614-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3s                     kubelet     Node multinode-658614-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.177690] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.135727] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.748079] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +0.061271] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.219362] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.037450] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.229095] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.074710] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.527421] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.162074] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +5.495006] kauditd_printk_skb: 70 callbacks suppressed
	[Mar15 23:40] kauditd_printk_skb: 4 callbacks suppressed
	[Mar15 23:45] systemd-fstab-generator[2775]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.167553] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.149284] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.246181] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +6.090323] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[  +0.083738] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.713553] systemd-fstab-generator[3062]: Ignoring "noauto" option for root device
	[  +4.701931] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 23:46] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.834913] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	[ +17.929648] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02] <==
	{"level":"info","ts":"2024-03-15T23:39:34.663773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.5:2379"}
	{"level":"info","ts":"2024-03-15T23:39:34.666535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:39:34.666763Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:39:34.669206Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:39:34.667186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T23:39:34.670176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T23:39:34.670214Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T23:39:34.677621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-03-15T23:41:16.516378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.693981ms","expected-duration":"100ms","prefix":"","request":"header:<ID:154123237053274849 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-jldtv\" mod_revision:571 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-jldtv\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-jldtv\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T23:41:16.517343Z","caller":"traceutil/trace.go:171","msg":"trace[61032887] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"205.790238ms","start":"2024-03-15T23:41:16.311496Z","end":"2024-03-15T23:41:16.517287Z","steps":["trace[61032887] 'process raft request'  (duration: 53.255979ms)","trace[61032887] 'compare'  (duration: 150.569607ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T23:41:20.817277Z","caller":"traceutil/trace.go:171","msg":"trace[857755669] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"116.767561ms","start":"2024-03-15T23:41:20.700487Z","end":"2024-03-15T23:41:20.817255Z","steps":["trace[857755669] 'read index received'  (duration: 116.542026ms)","trace[857755669] 'applied index is now lower than readState.Index'  (duration: 225.126µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T23:41:20.817538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.026022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T23:41:20.817645Z","caller":"traceutil/trace.go:171","msg":"trace[2024657246] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:600; }","duration":"117.169169ms","start":"2024-03-15T23:41:20.700462Z","end":"2024-03-15T23:41:20.817631Z","steps":["trace[2024657246] 'agreement among raft nodes before linearized reading'  (duration: 117.005368ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:41:20.81756Z","caller":"traceutil/trace.go:171","msg":"trace[1480458444] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"220.291088ms","start":"2024-03-15T23:41:20.597246Z","end":"2024-03-15T23:41:20.817537Z","steps":["trace[1480458444] 'process raft request'  (duration: 219.829193ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:41:21.236646Z","caller":"traceutil/trace.go:171","msg":"trace[1257671243] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"142.999904ms","start":"2024-03-15T23:41:21.093631Z","end":"2024-03-15T23:41:21.236631Z","steps":["trace[1257671243] 'process raft request'  (duration: 142.870186ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:44:12.766553Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T23:44:12.766748Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-658614","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	{"level":"warn","ts":"2024-03-15T23:44:12.767011Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:44:12.776259Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:44:12.849042Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:44:12.849162Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T23:44:12.849325Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c5263387c79c0223","current-leader-member-id":"c5263387c79c0223"}
	{"level":"info","ts":"2024-03-15T23:44:12.852469Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:44:12.852643Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:44:12.852681Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-658614","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> etcd [e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03] <==
	{"level":"info","ts":"2024-03-15T23:45:54.102387Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T23:45:54.1025Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T23:45:54.103419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 switched to configuration voters=(14206098732849300003)"}
	{"level":"info","ts":"2024-03-15T23:45:54.103646Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","added-peer-id":"c5263387c79c0223","added-peer-peer-urls":["https://192.168.39.5:2380"]}
	{"level":"info","ts":"2024-03-15T23:45:54.104016Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:45:54.106219Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:45:54.128991Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T23:45:54.129322Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c5263387c79c0223","initial-advertise-peer-urls":["https://192.168.39.5:2380"],"listen-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T23:45:54.129379Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T23:45:54.129465Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:45:54.129472Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:45:55.362441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-15T23:45:55.362482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-15T23:45:55.362496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgPreVoteResp from c5263387c79c0223 at term 2"}
	{"level":"info","ts":"2024-03-15T23:45:55.362507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became candidate at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.362513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgVoteResp from c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.362522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became leader at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.362547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c5263387c79c0223 elected leader c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.36528Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c5263387c79c0223","local-member-attributes":"{Name:multinode-658614 ClientURLs:[https://192.168.39.5:2379]}","request-path":"/0/members/c5263387c79c0223/attributes","cluster-id":"436188ec3031a10e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T23:45:55.365293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T23:45:55.365531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T23:45:55.366899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T23:45:55.367202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T23:45:55.367235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T23:45:55.366901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.5:2379"}
	
	
	==> kernel <==
	 23:47:18 up 8 min,  0 users,  load average: 0.15, 0.18, 0.10
	Linux multinode-658614 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b] <==
	I0315 23:43:27.392321       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:43:37.407644       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:43:37.407706       1 main.go:227] handling current node
	I0315 23:43:37.407721       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:43:37.407730       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:43:37.407886       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:43:37.407924       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:43:47.424045       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:43:47.424261       1 main.go:227] handling current node
	I0315 23:43:47.424369       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:43:47.424397       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:43:47.424740       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:43:47.424831       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:43:57.431806       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:43:57.431965       1 main.go:227] handling current node
	I0315 23:43:57.432003       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:43:57.432085       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:43:57.432404       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:43:57.432436       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:44:07.446336       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:44:07.446495       1 main.go:227] handling current node
	I0315 23:44:07.446532       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:44:07.446555       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:44:07.446723       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:44:07.446743       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291] <==
	I0315 23:46:28.685388       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:46:28.685532       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:46:28.685537       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:46:38.697418       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:46:38.697465       1 main.go:227] handling current node
	I0315 23:46:38.697487       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:46:38.697493       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:46:48.702957       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:46:48.703012       1 main.go:227] handling current node
	I0315 23:46:48.703044       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:46:48.703053       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:46:48.703282       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:46:48.703324       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:46:58.759644       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:46:58.759802       1 main.go:227] handling current node
	I0315 23:46:58.759878       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:46:58.759913       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:46:58.760043       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:46:58.760064       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:47:08.770727       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:47:08.770885       1 main.go:227] handling current node
	I0315 23:47:08.770916       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:47:08.770935       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:47:08.771082       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:47:08.771190       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e] <==
	W0315 23:44:12.768583       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.768656       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.768903       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.770288       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.785584       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.790092       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.790615       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.791624       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.792608       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.792920       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.793032       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.793153       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.796565       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.796652       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798499       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798692       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798772       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798854       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.799842       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.799938       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800000       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800053       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800166       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800408       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0315 23:44:12.803490       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f] <==
	I0315 23:45:56.766689       1 establishing_controller.go:76] Starting EstablishingController
	I0315 23:45:56.766801       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0315 23:45:56.766902       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0315 23:45:56.767003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 23:45:56.831285       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 23:45:56.843703       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 23:45:56.852752       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 23:45:56.860907       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 23:45:56.861091       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 23:45:56.863211       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 23:45:56.863220       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 23:45:56.864182       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 23:45:56.864826       1 aggregator.go:166] initial CRD sync complete...
	I0315 23:45:56.864878       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 23:45:56.864901       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 23:45:56.864924       1 cache.go:39] Caches are synced for autoregister controller
	I0315 23:45:56.885808       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0315 23:45:57.765952       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 23:45:59.422152       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 23:45:59.553225       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0315 23:45:59.564794       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0315 23:45:59.635909       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 23:45:59.646775       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 23:46:09.464284       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 23:46:09.567477       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507] <==
	I0315 23:46:33.964542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.335823ms"
	I0315 23:46:33.964657       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="42.04µs"
	I0315 23:46:39.367656       1 event.go:307] "Event occurred" object="multinode-658614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-658614-m02 event: Removing Node multinode-658614-m02 from Controller"
	I0315 23:46:39.691335       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m02\" does not exist"
	I0315 23:46:39.691766       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-r8z86" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-r8z86"
	I0315 23:46:39.699820       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m02" podCIDRs=["10.244.1.0/24"]
	I0315 23:46:39.913692       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.903µs"
	I0315 23:46:40.176514       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="49.824µs"
	I0315 23:46:40.207020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="89.973µs"
	I0315 23:46:40.219651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="111.593µs"
	I0315 23:46:40.248071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="117.836µs"
	I0315 23:46:40.256765       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.395µs"
	I0315 23:46:40.259782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.412µs"
	I0315 23:46:44.368262       1 event.go:307] "Event occurred" object="multinode-658614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-658614-m02 event: Registered Node multinode-658614-m02 in Controller"
	I0315 23:46:46.886780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:46:46.907783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="70.992µs"
	I0315 23:46:46.923163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.114µs"
	I0315 23:46:49.381397       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ljljd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ljljd"
	I0315 23:46:49.471279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.748032ms"
	I0315 23:46:49.472408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.664µs"
	I0315 23:47:05.097402       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:47:07.718775       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m03\" does not exist"
	I0315 23:47:07.721277       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:47:07.758650       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m03" podCIDRs=["10.244.2.0/24"]
	I0315 23:47:14.366321       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	
	
	==> kube-controller-manager [632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870] <==
	I0315 23:41:17.814595       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m03\" does not exist"
	I0315 23:41:17.814719       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:17.831219       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m03" podCIDRs=["10.244.2.0/24"]
	I0315 23:41:17.853340       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lfstz"
	I0315 23:41:17.861693       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w9gns"
	I0315 23:41:22.584640       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-658614-m03"
	I0315 23:41:22.584914       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-658614-m03 event: Registered Node multinode-658614-m03 in Controller"
	I0315 23:41:26.132281       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:57.558488       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:57.606499       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-658614-m03 event: Removing Node multinode-658614-m03 from Controller"
	I0315 23:41:59.992521       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:59.992631       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m03\" does not exist"
	I0315 23:42:00.014801       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m03" podCIDRs=["10.244.3.0/24"]
	I0315 23:42:02.607968       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-658614-m03 event: Registered Node multinode-658614-m03 in Controller"
	I0315 23:42:06.839470       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m03"
	I0315 23:42:47.641439       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:42:47.642244       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-658614-m03 status is now: NodeNotReady"
	I0315 23:42:47.650947       1 event.go:307] "Event occurred" object="multinode-658614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-658614-m02 status is now: NodeNotReady"
	I0315 23:42:47.662296       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lfstz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.669477       1 event.go:307] "Event occurred" object="kube-system/kindnet-f9785" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.677607       1 event.go:307] "Event occurred" object="kube-system/kindnet-w9gns" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.685155       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ph8fc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.700933       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-r8z86" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.715047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.817959ms"
	I0315 23:42:47.715823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="133.876µs"
	
	
	==> kube-proxy [90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06] <==
	I0315 23:45:57.999551       1 server_others.go:69] "Using iptables proxy"
	I0315 23:45:58.019787       1 node.go:141] Successfully retrieved node IP: 192.168.39.5
	I0315 23:45:58.084363       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:45:58.084416       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:45:58.089740       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:45:58.089829       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:45:58.089996       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:45:58.090027       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:45:58.095661       1 config.go:188] "Starting service config controller"
	I0315 23:45:58.095726       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:45:58.095771       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:45:58.095796       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:45:58.096492       1 config.go:315] "Starting node config controller"
	I0315 23:45:58.096525       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 23:45:58.196266       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:45:58.196605       1 shared_informer.go:318] Caches are synced for node config
	I0315 23:45:58.196900       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c] <==
	I0315 23:39:53.498752       1 server_others.go:69] "Using iptables proxy"
	I0315 23:39:53.519363       1 node.go:141] Successfully retrieved node IP: 192.168.39.5
	I0315 23:39:53.604885       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:39:53.604937       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:39:53.613345       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:39:53.613407       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:39:53.613562       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:39:53.613595       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:39:53.614885       1 config.go:188] "Starting service config controller"
	I0315 23:39:53.614931       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:39:53.614951       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:39:53.614954       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:39:53.615390       1 config.go:315] "Starting node config controller"
	I0315 23:39:53.615398       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 23:39:53.715164       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:39:53.715223       1 shared_informer.go:318] Caches are synced for service config
	I0315 23:39:53.715532       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2] <==
	I0315 23:45:54.862918       1 serving.go:348] Generated self-signed cert in-memory
	W0315 23:45:56.793775       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 23:45:56.794435       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 23:45:56.794494       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 23:45:56.794520       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 23:45:56.829569       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0315 23:45:56.829663       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:45:56.831418       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 23:45:56.831599       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 23:45:56.837442       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 23:45:56.837569       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 23:45:56.932412       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1] <==
	E0315 23:39:36.403029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:39:36.403037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 23:39:36.403089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 23:39:36.403143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 23:39:36.403149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 23:39:36.403589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 23:39:36.403631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 23:39:37.214063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 23:39:37.214156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 23:39:37.234740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 23:39:37.234789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 23:39:37.275918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 23:39:37.275963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 23:39:37.289024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 23:39:37.289201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 23:39:37.533055       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 23:39:37.533213       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 23:39:37.591528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 23:39:37.591645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 23:39:37.603303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 23:39:37.603353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 23:39:37.665797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:39:37.665847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0315 23:39:40.191800       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 23:44:12.778359       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.014922    3069 topology_manager.go:215] "Topology Admit Handler" podUID="2522127e-36a4-483f-8ede-5600caf9f295" podNamespace="kube-system" podName="kindnet-fbp4p"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.015043    3069 topology_manager.go:215] "Topology Admit Handler" podUID="1e8e16bd-d511-4417-ac7a-5308ad831bf5" podNamespace="kube-system" podName="storage-provisioner"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.015206    3069 topology_manager.go:215] "Topology Admit Handler" podUID="f1af3d85-99fc-4912-b3ad-82ba68669470" podNamespace="default" podName="busybox-5b5d89c9d6-92n6k"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.019997    3069 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.084412    3069 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2522127e-36a4-483f-8ede-5600caf9f295-lib-modules\") pod \"kindnet-fbp4p\" (UID: \"2522127e-36a4-483f-8ede-5600caf9f295\") " pod="kube-system/kindnet-fbp4p"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.084988    3069 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1e8e16bd-d511-4417-ac7a-5308ad831bf5-tmp\") pod \"storage-provisioner\" (UID: \"1e8e16bd-d511-4417-ac7a-5308ad831bf5\") " pod="kube-system/storage-provisioner"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.085251    3069 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2522127e-36a4-483f-8ede-5600caf9f295-cni-cfg\") pod \"kindnet-fbp4p\" (UID: \"2522127e-36a4-483f-8ede-5600caf9f295\") " pod="kube-system/kindnet-fbp4p"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.085401    3069 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2522127e-36a4-483f-8ede-5600caf9f295-xtables-lock\") pod \"kindnet-fbp4p\" (UID: \"2522127e-36a4-483f-8ede-5600caf9f295\") " pod="kube-system/kindnet-fbp4p"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.086216    3069 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3b896986-5939-4631-8c76-b5d5159a4353-xtables-lock\") pod \"kube-proxy-htvcb\" (UID: \"3b896986-5939-4631-8c76-b5d5159a4353\") " pod="kube-system/kube-proxy-htvcb"
	Mar 15 23:45:57 multinode-658614 kubelet[3069]: I0315 23:45:57.086751    3069 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3b896986-5939-4631-8c76-b5d5159a4353-lib-modules\") pod \"kube-proxy-htvcb\" (UID: \"3b896986-5939-4631-8c76-b5d5159a4353\") " pod="kube-system/kube-proxy-htvcb"
	Mar 15 23:46:02 multinode-658614 kubelet[3069]: I0315 23:46:02.784875    3069 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.107786    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf5d895b743ed129ff72231a6f50da5fc/crio-63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a: Error finding container 63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a: Status 404 returned error can't find the container with id 63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.108420    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod3b896986-5939-4631-8c76-b5d5159a4353/crio-af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba: Error finding container af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba: Status 404 returned error can't find the container with id af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.108740    3069 manager.go:1106] Failed to create existing container: /kubepods/pod2522127e-36a4-483f-8ede-5600caf9f295/crio-02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c: Error finding container 02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c: Status 404 returned error can't find the container with id 02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.108993    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podf1af3d85-99fc-4912-b3ad-82ba68669470/crio-8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5: Error finding container 8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5: Status 404 returned error can't find the container with id 8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.109262    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod09dfcf49-6c59-4977-9640-f1a4d6821864/crio-cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484: Error finding container cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484: Status 404 returned error can't find the container with id cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.109584    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod1e8e16bd-d511-4417-ac7a-5308ad831bf5/crio-257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f: Error finding container 257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f: Status 404 returned error can't find the container with id 257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.109903    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/podad72ccbcff6c402d10ba31c6081afcd8/crio-d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc: Error finding container d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc: Status 404 returned error can't find the container with id d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.110142    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod561be46583aa09d103c7726ea003d0c9/crio-ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22: Error finding container ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22: Status 404 returned error can't find the container with id ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.110399    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod650852b8f83dd8b1bddbd7c262bccccf/crio-e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94: Error finding container e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94: Status 404 returned error can't find the container with id e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94
	Mar 15 23:46:53 multinode-658614 kubelet[3069]: E0315 23:46:53.123382    3069 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:46:53 multinode-658614 kubelet[3069]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:46:53 multinode-658614 kubelet[3069]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:46:53 multinode-658614 kubelet[3069]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:46:53 multinode-658614 kubelet[3069]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 23:47:17.099447  109204 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17991-75602/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-658614 -n multinode-658614
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-658614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (309.31s)
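
For reference, the "bufio.Scanner: token too long" error in the stderr block above is the standard failure mode when a scanned file contains a line longer than bufio's default 64 KiB token limit (bufio.MaxScanTokenSize). The following standalone Go sketch is illustrative only, not minikube's actual logs.go, and the file path is hypothetical; it shows how a scanner hits bufio.ErrTooLong and how enlarging the buffer avoids it:

	package main

	import (
		"bufio"
		"fmt"
		"os"
	)

	// Minimal sketch: read a file whose lines may exceed 64 KiB.
	// Without the Buffer() call, Scanner.Err() returns bufio.ErrTooLong,
	// which prints as "bufio.Scanner: token too long".
	func main() {
		f, err := os.Open("lastStart.txt") // hypothetical path, for illustration
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		defer f.Close()

		sc := bufio.NewScanner(f)
		// Allow lines up to 10 MiB instead of the 64 KiB default.
		sc.Buffer(make([]byte, 0, 1024*1024), 10*1024*1024)
		for sc.Scan() {
			fmt.Println(sc.Text())
		}
		if err := sc.Err(); err != nil {
			fmt.Fprintln(os.Stderr, "scan error:", err)
		}
	}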

                                                
                                    
TestMultiNode/serial/StopMultiNode (141.59s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 stop
E0315 23:48:58.906229   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:49:08.403457   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-658614 stop: exit status 82 (2m0.4739602s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-658614-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-658614 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-658614 status: exit status 3 (18.854912375s)

                                                
                                                
-- stdout --
	multinode-658614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-658614-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 23:49:40.731640  109753 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	E0315 23:49:40.731678  109753 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-658614 status" : exit status 3
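
The status failure above bottoms out in a plain network error: the SSH dial to the m02 node at 192.168.39.215:22 returns "connect: no route to host" once the VM is no longer reachable. A minimal Go sketch (illustrative only, not minikube's status code) that reproduces the same underlying dial error against that address:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the worker node's SSH port with a short timeout; when the VM is
	// gone, the error matches the one logged by status.go above,
	// e.g. "dial tcp 192.168.39.215:22: connect: no route to host".
	func main() {
		addr := "192.168.39.215:22" // m02 address from the log
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err != nil {
			fmt.Println("unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("reachable:", addr)
	}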
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-658614 -n multinode-658614
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-658614 logs -n 25: (1.563828054s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614:/home/docker/cp-test_multinode-658614-m02_multinode-658614.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614 sudo cat                                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m02_multinode-658614.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03:/home/docker/cp-test_multinode-658614-m02_multinode-658614-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614-m03 sudo cat                                   | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m02_multinode-658614-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp testdata/cp-test.txt                                                | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile2872696795/001/cp-test_multinode-658614-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614:/home/docker/cp-test_multinode-658614-m03_multinode-658614.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614 sudo cat                                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m03_multinode-658614.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt                       | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m02:/home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n                                                                 | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | multinode-658614-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-658614 ssh -n multinode-658614-m02 sudo cat                                   | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	|         | /home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-658614 node stop m03                                                          | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:41 UTC |
	| node    | multinode-658614 node start                                                             | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:41 UTC | 15 Mar 24 23:42 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-658614                                                                | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:42 UTC |                     |
	| stop    | -p multinode-658614                                                                     | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:42 UTC |                     |
	| start   | -p multinode-658614                                                                     | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:44 UTC | 15 Mar 24 23:47 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-658614                                                                | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:47 UTC |                     |
	| node    | multinode-658614 node delete                                                            | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:47 UTC | 15 Mar 24 23:47 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-658614 stop                                                                   | multinode-658614 | jenkins | v1.32.0 | 15 Mar 24 23:47 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 23:44:11
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 23:44:11.844237  108344 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:44:11.844368  108344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:44:11.844376  108344 out.go:304] Setting ErrFile to fd 2...
	I0315 23:44:11.844383  108344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:44:11.844617  108344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:44:11.845183  108344 out.go:298] Setting JSON to false
	I0315 23:44:11.846116  108344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":8802,"bootTime":1710537450,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:44:11.846184  108344 start.go:139] virtualization: kvm guest
	I0315 23:44:11.848745  108344 out.go:177] * [multinode-658614] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:44:11.850497  108344 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:44:11.850476  108344 notify.go:220] Checking for updates...
	I0315 23:44:11.851794  108344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:44:11.853104  108344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:44:11.854544  108344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:44:11.855874  108344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:44:11.857246  108344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:44:11.859387  108344 config.go:182] Loaded profile config "multinode-658614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:44:11.859544  108344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:44:11.860199  108344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:44:11.860280  108344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:44:11.877159  108344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40383
	I0315 23:44:11.877539  108344 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:44:11.878077  108344 main.go:141] libmachine: Using API Version  1
	I0315 23:44:11.878100  108344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:44:11.878468  108344 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:44:11.878645  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:44:11.913466  108344 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 23:44:11.914852  108344 start.go:297] selected driver: kvm2
	I0315 23:44:11.914870  108344 start.go:901] validating driver "kvm2" against &{Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingre
ss-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOp
timizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:44:11.915047  108344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:44:11.915452  108344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:44:11.915537  108344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:44:11.930661  108344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:44:11.931782  108344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0315 23:44:11.931903  108344 cni.go:84] Creating CNI manager for ""
	I0315 23:44:11.931926  108344 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 23:44:11.932043  108344 start.go:340] cluster config:
	{Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false ko
ng:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:44:11.932334  108344 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:44:11.935256  108344 out.go:177] * Starting "multinode-658614" primary control-plane node in "multinode-658614" cluster
	I0315 23:44:11.936587  108344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:44:11.936638  108344 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0315 23:44:11.936652  108344 cache.go:56] Caching tarball of preloaded images
	I0315 23:44:11.936726  108344 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:44:11.936741  108344 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0315 23:44:11.936867  108344 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/config.json ...
	I0315 23:44:11.937077  108344 start.go:360] acquireMachinesLock for multinode-658614: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:44:11.937124  108344 start.go:364] duration metric: took 27.635µs to acquireMachinesLock for "multinode-658614"
	I0315 23:44:11.937144  108344 start.go:96] Skipping create...Using existing machine configuration
	I0315 23:44:11.937153  108344 fix.go:54] fixHost starting: 
	I0315 23:44:11.937450  108344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:44:11.937477  108344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:44:11.952051  108344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43041
	I0315 23:44:11.952528  108344 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:44:11.953022  108344 main.go:141] libmachine: Using API Version  1
	I0315 23:44:11.953046  108344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:44:11.953365  108344 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:44:11.953534  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:44:11.953691  108344 main.go:141] libmachine: (multinode-658614) Calling .GetState
	I0315 23:44:11.955421  108344 fix.go:112] recreateIfNeeded on multinode-658614: state=Running err=<nil>
	W0315 23:44:11.955441  108344 fix.go:138] unexpected machine state, will restart: <nil>
	I0315 23:44:11.958537  108344 out.go:177] * Updating the running kvm2 "multinode-658614" VM ...
	I0315 23:44:11.959824  108344 machine.go:94] provisionDockerMachine start ...
	I0315 23:44:11.959849  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:44:11.960054  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:11.962482  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:11.962941  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:11.962967  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:11.963108  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:11.963275  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:11.963434  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:11.963570  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:11.963722  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:11.963895  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:11.963905  108344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0315 23:44:12.085675  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-658614
	
	I0315 23:44:12.085701  108344 main.go:141] libmachine: (multinode-658614) Calling .GetMachineName
	I0315 23:44:12.085945  108344 buildroot.go:166] provisioning hostname "multinode-658614"
	I0315 23:44:12.085971  108344 main.go:141] libmachine: (multinode-658614) Calling .GetMachineName
	I0315 23:44:12.086139  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.089166  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.089532  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.089561  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.089740  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.089914  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.090138  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.090294  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.090475  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:12.090670  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:12.090695  108344 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-658614 && echo "multinode-658614" | sudo tee /etc/hostname
	I0315 23:44:12.224949  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-658614
	
	I0315 23:44:12.224986  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.227810  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.228189  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.228235  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.228359  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.228569  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.228714  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.228862  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.229027  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:12.229232  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:12.229249  108344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-658614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-658614/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-658614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0315 23:44:12.349223  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0315 23:44:12.349256  108344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0315 23:44:12.349283  108344 buildroot.go:174] setting up certificates
	I0315 23:44:12.349297  108344 provision.go:84] configureAuth start
	I0315 23:44:12.349314  108344 main.go:141] libmachine: (multinode-658614) Calling .GetMachineName
	I0315 23:44:12.349605  108344 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:44:12.352433  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.352762  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.352793  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.352926  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.355062  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.355386  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.355415  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.355683  108344 provision.go:143] copyHostCerts
	I0315 23:44:12.355730  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:44:12.355766  108344 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0315 23:44:12.355776  108344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0315 23:44:12.355841  108344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0315 23:44:12.355926  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:44:12.355948  108344 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0315 23:44:12.355957  108344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0315 23:44:12.355995  108344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0315 23:44:12.356075  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:44:12.356101  108344 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0315 23:44:12.356111  108344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0315 23:44:12.356143  108344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0315 23:44:12.356198  108344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.multinode-658614 san=[127.0.0.1 192.168.39.5 localhost minikube multinode-658614]
	I0315 23:44:12.448319  108344 provision.go:177] copyRemoteCerts
	I0315 23:44:12.448408  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0315 23:44:12.448440  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.451093  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.451483  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.451520  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.451717  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.451923  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.452137  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.452291  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:44:12.541199  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0315 23:44:12.541285  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0315 23:44:12.573311  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0315 23:44:12.573390  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0315 23:44:12.605134  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0315 23:44:12.605202  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0315 23:44:12.632715  108344 provision.go:87] duration metric: took 283.401081ms to configureAuth
	I0315 23:44:12.632751  108344 buildroot.go:189] setting minikube options for container-runtime
	I0315 23:44:12.632990  108344 config.go:182] Loaded profile config "multinode-658614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:44:12.633077  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:44:12.635557  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.635850  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:44:12.635879  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:44:12.636042  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:44:12.636243  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.636392  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:44:12.636544  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:44:12.636716  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:44:12.636920  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:44:12.636937  108344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0315 23:45:43.525961  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0315 23:45:43.525996  108344 machine.go:97] duration metric: took 1m31.566153193s to provisionDockerMachine
	I0315 23:45:43.526011  108344 start.go:293] postStartSetup for "multinode-658614" (driver="kvm2")
	I0315 23:45:43.526023  108344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0315 23:45:43.526047  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.526427  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0315 23:45:43.526472  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.529825  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.530289  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.530310  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.530483  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.530681  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.530841  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.530980  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:45:43.620071  108344 ssh_runner.go:195] Run: cat /etc/os-release
	I0315 23:45:43.624213  108344 command_runner.go:130] > NAME=Buildroot
	I0315 23:45:43.624230  108344 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0315 23:45:43.624234  108344 command_runner.go:130] > ID=buildroot
	I0315 23:45:43.624239  108344 command_runner.go:130] > VERSION_ID=2023.02.9
	I0315 23:45:43.624245  108344 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0315 23:45:43.624402  108344 info.go:137] Remote host: Buildroot 2023.02.9
	I0315 23:45:43.624423  108344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0315 23:45:43.624500  108344 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0315 23:45:43.624605  108344 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0315 23:45:43.624619  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /etc/ssl/certs/828702.pem
	I0315 23:45:43.624703  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0315 23:45:43.634722  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:45:43.660452  108344 start.go:296] duration metric: took 134.424394ms for postStartSetup
	I0315 23:45:43.660495  108344 fix.go:56] duration metric: took 1m31.723341545s for fixHost
	I0315 23:45:43.660520  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.663251  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.663645  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.663676  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.663813  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.664028  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.664217  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.664340  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.664534  108344 main.go:141] libmachine: Using SSH client type: native
	I0315 23:45:43.664700  108344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.5 22 <nil> <nil>}
	I0315 23:45:43.664711  108344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0315 23:45:43.780435  108344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710546343.752985565
	
	I0315 23:45:43.780465  108344 fix.go:216] guest clock: 1710546343.752985565
	I0315 23:45:43.780475  108344 fix.go:229] Guest: 2024-03-15 23:45:43.752985565 +0000 UTC Remote: 2024-03-15 23:45:43.660500222 +0000 UTC m=+91.866010704 (delta=92.485343ms)
	I0315 23:45:43.780502  108344 fix.go:200] guest clock delta is within tolerance: 92.485343ms
	I0315 23:45:43.780508  108344 start.go:83] releasing machines lock for "multinode-658614", held for 1m31.843371453s
	I0315 23:45:43.780529  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.780786  108344 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:45:43.783656  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.784058  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.784113  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.784232  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.784750  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.784967  108344 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:45:43.785062  108344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0315 23:45:43.785117  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.785190  108344 ssh_runner.go:195] Run: cat /version.json
	I0315 23:45:43.785213  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:45:43.787678  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.787896  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.788040  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.788066  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.788227  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:43.788242  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.788254  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:43.788383  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:45:43.788461  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.788592  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:45:43.788594  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.788755  108344 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:45:43.788763  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:45:43.788875  108344 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:45:43.868092  108344 command_runner.go:130] > {"iso_version": "v1.32.1-1710520390-17991", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "3dd306d082737a9ddf335108b42c9fcb2ad84298"}
	I0315 23:45:43.868454  108344 ssh_runner.go:195] Run: systemctl --version
	I0315 23:45:43.893211  108344 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0315 23:45:43.893254  108344 command_runner.go:130] > systemd 252 (252)
	I0315 23:45:43.893273  108344 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0315 23:45:43.893323  108344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0315 23:45:44.055930  108344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0315 23:45:44.072544  108344 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0315 23:45:44.072652  108344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0315 23:45:44.072723  108344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0315 23:45:44.082252  108344 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
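Note: the find/mv step above renames any pre-existing bridge or podman CNI configs so they cannot conflict with the cluster's own CNI. A rough Go equivalent of that step, purely illustrative (the glob patterns and the .mk_disabled suffix are taken from the command logged above):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// Mirror the logged find command: rename bridge/podman CNI configs in
	// /etc/cni/net.d, skipping files that were already disabled.
	for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
		matches, err := filepath.Glob(pattern)
		if err != nil {
			panic(err)
		}
		for _, path := range matches {
			if strings.HasSuffix(path, ".mk_disabled") {
				continue
			}
			fmt.Println("disabling", path)
			if err := os.Rename(path, path+".mk_disabled"); err != nil {
				panic(err)
			}
		}
	}
}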
	I0315 23:45:44.082279  108344 start.go:494] detecting cgroup driver to use...
	I0315 23:45:44.082345  108344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0315 23:45:44.099124  108344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0315 23:45:44.113500  108344 docker.go:217] disabling cri-docker service (if available) ...
	I0315 23:45:44.113579  108344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0315 23:45:44.127840  108344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0315 23:45:44.141627  108344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0315 23:45:44.289795  108344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0315 23:45:44.430265  108344 docker.go:233] disabling docker service ...
	I0315 23:45:44.430342  108344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0315 23:45:44.447126  108344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0315 23:45:44.461276  108344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0315 23:45:44.602410  108344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0315 23:45:44.747722  108344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
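Note: the stop/disable/mask sequence above ends with `systemctl is-active --quiet` probes; that command exits 0 only when the unit is active, so the exit status alone answers whether containerd or docker is still running. A minimal sketch of that probe (an illustration, not minikube's own helper):

package main

import (
	"fmt"
	"os/exec"
)

// unitActive reports whether a systemd unit is currently active:
// `systemctl is-active --quiet <unit>` returns exit code 0 only in that case.
func unitActive(unit string) bool {
	return exec.Command("sudo", "systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, u := range []string{"containerd", "docker"} {
		fmt.Printf("%s active: %v\n", u, unitActive(u))
	}
}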
	I0315 23:45:44.761943  108344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0315 23:45:44.781948  108344 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0315 23:45:44.782518  108344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0315 23:45:44.782571  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.793215  108344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0315 23:45:44.793276  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.803469  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0315 23:45:44.813764  108344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
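Note: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so that pause_image points at registry.k8s.io/pause:3.9, cgroup_manager is set to "cgroupfs", and a conmon_cgroup = "pod" line follows it. A rough Go sketch of the same in-place edit, for illustration only (paths and values are taken from the commands logged above; minikube itself runs sed over SSH as shown):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		log.Fatal(err)
	}
	// Point CRI-O at the pause image kubeadm expects.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Drop any existing conmon_cgroup line, then switch to the cgroupfs
	// driver and re-add conmon_cgroup right after it, as the sed calls do.
	data = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*\n?`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		log.Fatal(err)
	}
}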
	I0315 23:45:44.824092  108344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0315 23:45:44.834450  108344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0315 23:45:44.843611  108344 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0315 23:45:44.843662  108344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0315 23:45:44.854110  108344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:45:45.004259  108344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0315 23:45:50.574154  108344 ssh_runner.go:235] Completed: sudo systemctl restart crio: (5.569842268s)
	I0315 23:45:50.574191  108344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0315 23:45:50.574251  108344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0315 23:45:50.579391  108344 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0315 23:45:50.579427  108344 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0315 23:45:50.579437  108344 command_runner.go:130] > Device: 0,22	Inode: 1336        Links: 1
	I0315 23:45:50.579447  108344 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 23:45:50.579454  108344 command_runner.go:130] > Access: 2024-03-15 23:45:50.430659656 +0000
	I0315 23:45:50.579463  108344 command_runner.go:130] > Modify: 2024-03-15 23:45:50.430659656 +0000
	I0315 23:45:50.579473  108344 command_runner.go:130] > Change: 2024-03-15 23:45:50.430659656 +0000
	I0315 23:45:50.579478  108344 command_runner.go:130] >  Birth: -
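Note: after restarting CRI-O the log waits up to 60s for /var/run/crio/crio.sock, and the successful stat above confirms the socket is back. A minimal sketch of such a wait loop, assuming simple polling (illustrative, not minikube's implementation):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls the path until stat succeeds or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}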
	I0315 23:45:50.579571  108344 start.go:562] Will wait 60s for crictl version
	I0315 23:45:50.579622  108344 ssh_runner.go:195] Run: which crictl
	I0315 23:45:50.583233  108344 command_runner.go:130] > /usr/bin/crictl
	I0315 23:45:50.583417  108344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0315 23:45:50.620256  108344 command_runner.go:130] > Version:  0.1.0
	I0315 23:45:50.620290  108344 command_runner.go:130] > RuntimeName:  cri-o
	I0315 23:45:50.620297  108344 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0315 23:45:50.620305  108344 command_runner.go:130] > RuntimeApiVersion:  v1
	I0315 23:45:50.621581  108344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0315 23:45:50.621664  108344 ssh_runner.go:195] Run: crio --version
	I0315 23:45:50.651948  108344 command_runner.go:130] > crio version 1.29.1
	I0315 23:45:50.651971  108344 command_runner.go:130] > Version:        1.29.1
	I0315 23:45:50.651978  108344 command_runner.go:130] > GitCommit:      unknown
	I0315 23:45:50.651985  108344 command_runner.go:130] > GitCommitDate:  unknown
	I0315 23:45:50.651992  108344 command_runner.go:130] > GitTreeState:   clean
	I0315 23:45:50.652001  108344 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0315 23:45:50.652007  108344 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 23:45:50.652015  108344 command_runner.go:130] > Compiler:       gc
	I0315 23:45:50.652023  108344 command_runner.go:130] > Platform:       linux/amd64
	I0315 23:45:50.652035  108344 command_runner.go:130] > Linkmode:       dynamic
	I0315 23:45:50.652042  108344 command_runner.go:130] > BuildTags:      
	I0315 23:45:50.652048  108344 command_runner.go:130] >   containers_image_ostree_stub
	I0315 23:45:50.652054  108344 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 23:45:50.652059  108344 command_runner.go:130] >   btrfs_noversion
	I0315 23:45:50.652065  108344 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 23:45:50.652076  108344 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 23:45:50.652081  108344 command_runner.go:130] >   seccomp
	I0315 23:45:50.652086  108344 command_runner.go:130] > LDFlags:          unknown
	I0315 23:45:50.652093  108344 command_runner.go:130] > SeccompEnabled:   true
	I0315 23:45:50.652100  108344 command_runner.go:130] > AppArmorEnabled:  false
	I0315 23:45:50.652177  108344 ssh_runner.go:195] Run: crio --version
	I0315 23:45:50.680856  108344 command_runner.go:130] > crio version 1.29.1
	I0315 23:45:50.680879  108344 command_runner.go:130] > Version:        1.29.1
	I0315 23:45:50.680885  108344 command_runner.go:130] > GitCommit:      unknown
	I0315 23:45:50.680896  108344 command_runner.go:130] > GitCommitDate:  unknown
	I0315 23:45:50.680900  108344 command_runner.go:130] > GitTreeState:   clean
	I0315 23:45:50.680905  108344 command_runner.go:130] > BuildDate:      2024-03-15T21:54:37Z
	I0315 23:45:50.680910  108344 command_runner.go:130] > GoVersion:      go1.21.6
	I0315 23:45:50.680913  108344 command_runner.go:130] > Compiler:       gc
	I0315 23:45:50.680918  108344 command_runner.go:130] > Platform:       linux/amd64
	I0315 23:45:50.680921  108344 command_runner.go:130] > Linkmode:       dynamic
	I0315 23:45:50.680927  108344 command_runner.go:130] > BuildTags:      
	I0315 23:45:50.680931  108344 command_runner.go:130] >   containers_image_ostree_stub
	I0315 23:45:50.680936  108344 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0315 23:45:50.680940  108344 command_runner.go:130] >   btrfs_noversion
	I0315 23:45:50.680944  108344 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0315 23:45:50.680950  108344 command_runner.go:130] >   libdm_no_deferred_remove
	I0315 23:45:50.680955  108344 command_runner.go:130] >   seccomp
	I0315 23:45:50.680959  108344 command_runner.go:130] > LDFlags:          unknown
	I0315 23:45:50.680963  108344 command_runner.go:130] > SeccompEnabled:   true
	I0315 23:45:50.680968  108344 command_runner.go:130] > AppArmorEnabled:  false
	I0315 23:45:50.684271  108344 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0315 23:45:50.685746  108344 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:45:50.688406  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:50.688680  108344 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:45:50.688716  108344 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:45:50.688912  108344 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0315 23:45:50.693409  108344 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0315 23:45:50.693562  108344 kubeadm.go:877] updating cluster {Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations
:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0315 23:45:50.693750  108344 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0315 23:45:50.693813  108344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:45:50.751426  108344 command_runner.go:130] > {
	I0315 23:45:50.751453  108344 command_runner.go:130] >   "images": [
	I0315 23:45:50.751458  108344 command_runner.go:130] >     {
	I0315 23:45:50.751470  108344 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 23:45:50.751475  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751481  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 23:45:50.751485  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751489  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751497  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 23:45:50.751504  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 23:45:50.751510  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751514  108344 command_runner.go:130] >       "size": "65258016",
	I0315 23:45:50.751521  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751526  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751536  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751542  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751546  108344 command_runner.go:130] >     },
	I0315 23:45:50.751552  108344 command_runner.go:130] >     {
	I0315 23:45:50.751561  108344 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 23:45:50.751574  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751587  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 23:45:50.751591  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751607  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751615  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 23:45:50.751622  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 23:45:50.751629  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751633  108344 command_runner.go:130] >       "size": "65291810",
	I0315 23:45:50.751636  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751644  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751650  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751653  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751659  108344 command_runner.go:130] >     },
	I0315 23:45:50.751663  108344 command_runner.go:130] >     {
	I0315 23:45:50.751670  108344 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 23:45:50.751676  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751682  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 23:45:50.751688  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751692  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751700  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 23:45:50.751708  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 23:45:50.751714  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751718  108344 command_runner.go:130] >       "size": "1363676",
	I0315 23:45:50.751724  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751728  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751734  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751739  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751744  108344 command_runner.go:130] >     },
	I0315 23:45:50.751748  108344 command_runner.go:130] >     {
	I0315 23:45:50.751756  108344 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 23:45:50.751760  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751765  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 23:45:50.751771  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751775  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751785  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 23:45:50.751800  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 23:45:50.751806  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751811  108344 command_runner.go:130] >       "size": "31470524",
	I0315 23:45:50.751817  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751826  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751832  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751836  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751841  108344 command_runner.go:130] >     },
	I0315 23:45:50.751845  108344 command_runner.go:130] >     {
	I0315 23:45:50.751853  108344 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 23:45:50.751859  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751865  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 23:45:50.751870  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751874  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751883  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 23:45:50.751892  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 23:45:50.751896  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751900  108344 command_runner.go:130] >       "size": "53621675",
	I0315 23:45:50.751903  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.751907  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.751913  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.751917  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.751921  108344 command_runner.go:130] >     },
	I0315 23:45:50.751924  108344 command_runner.go:130] >     {
	I0315 23:45:50.751932  108344 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 23:45:50.751936  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.751941  108344 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 23:45:50.751947  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751951  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.751959  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 23:45:50.751968  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 23:45:50.751974  108344 command_runner.go:130] >       ],
	I0315 23:45:50.751978  108344 command_runner.go:130] >       "size": "295456551",
	I0315 23:45:50.751983  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.751987  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.751993  108344 command_runner.go:130] >       },
	I0315 23:45:50.751997  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752003  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752007  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752012  108344 command_runner.go:130] >     },
	I0315 23:45:50.752023  108344 command_runner.go:130] >     {
	I0315 23:45:50.752031  108344 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 23:45:50.752035  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752042  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 23:45:50.752046  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752053  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752060  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 23:45:50.752069  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 23:45:50.752074  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752078  108344 command_runner.go:130] >       "size": "127226832",
	I0315 23:45:50.752084  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752093  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.752100  108344 command_runner.go:130] >       },
	I0315 23:45:50.752103  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752109  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752113  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752117  108344 command_runner.go:130] >     },
	I0315 23:45:50.752120  108344 command_runner.go:130] >     {
	I0315 23:45:50.752129  108344 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 23:45:50.752133  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752139  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 23:45:50.752144  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752149  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752170  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 23:45:50.752180  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 23:45:50.752186  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752190  108344 command_runner.go:130] >       "size": "123261750",
	I0315 23:45:50.752196  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752200  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.752205  108344 command_runner.go:130] >       },
	I0315 23:45:50.752209  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752215  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752219  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752225  108344 command_runner.go:130] >     },
	I0315 23:45:50.752229  108344 command_runner.go:130] >     {
	I0315 23:45:50.752237  108344 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 23:45:50.752246  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752254  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 23:45:50.752258  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752262  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752271  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 23:45:50.752277  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 23:45:50.752281  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752284  108344 command_runner.go:130] >       "size": "74749335",
	I0315 23:45:50.752288  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.752291  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752295  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752298  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752301  108344 command_runner.go:130] >     },
	I0315 23:45:50.752307  108344 command_runner.go:130] >     {
	I0315 23:45:50.752313  108344 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 23:45:50.752319  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752324  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 23:45:50.752330  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752334  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752343  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 23:45:50.752352  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 23:45:50.752358  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752362  108344 command_runner.go:130] >       "size": "61551410",
	I0315 23:45:50.752365  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752371  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.752374  108344 command_runner.go:130] >       },
	I0315 23:45:50.752380  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752384  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752388  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.752391  108344 command_runner.go:130] >     },
	I0315 23:45:50.752394  108344 command_runner.go:130] >     {
	I0315 23:45:50.752403  108344 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 23:45:50.752407  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.752413  108344 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 23:45:50.752417  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752421  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.752435  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 23:45:50.752445  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 23:45:50.752448  108344 command_runner.go:130] >       ],
	I0315 23:45:50.752452  108344 command_runner.go:130] >       "size": "750414",
	I0315 23:45:50.752456  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.752460  108344 command_runner.go:130] >         "value": "65535"
	I0315 23:45:50.752465  108344 command_runner.go:130] >       },
	I0315 23:45:50.752469  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.752475  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.752479  108344 command_runner.go:130] >       "pinned": true
	I0315 23:45:50.752482  108344 command_runner.go:130] >     }
	I0315 23:45:50.752486  108344 command_runner.go:130] >   ]
	I0315 23:45:50.752491  108344 command_runner.go:130] > }
	I0315 23:45:50.752795  108344 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:45:50.752812  108344 crio.go:415] Images already preloaded, skipping extraction
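Note: the preload check above lists images with `sudo crictl images --output json` and concludes that everything needed for Kubernetes v1.28.4 on CRI-O is already present. A small sketch of how that JSON could be inspected; the struct fields mirror the output printed above, and the required-image list here is illustrative, not minikube's exact list:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages mirrors the shape of `crictl images --output json` shown above.
type crictlImages struct {
	Images []struct {
		ID       string   `json:"id"`
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list crictlImages
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Example tags from the listing above; not the full set minikube verifies.
	for _, want := range []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/coredns/coredns:v1.10.1",
		"registry.k8s.io/pause:3.9",
	} {
		fmt.Printf("%-45s present=%v\n", want, have[want])
	}
}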
	I0315 23:45:50.752863  108344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0315 23:45:50.792433  108344 command_runner.go:130] > {
	I0315 23:45:50.792459  108344 command_runner.go:130] >   "images": [
	I0315 23:45:50.792463  108344 command_runner.go:130] >     {
	I0315 23:45:50.792471  108344 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0315 23:45:50.792476  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792481  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0315 23:45:50.792485  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792488  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792497  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0315 23:45:50.792503  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0315 23:45:50.792508  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792513  108344 command_runner.go:130] >       "size": "65258016",
	I0315 23:45:50.792516  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792520  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792531  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792541  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792544  108344 command_runner.go:130] >     },
	I0315 23:45:50.792548  108344 command_runner.go:130] >     {
	I0315 23:45:50.792554  108344 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0315 23:45:50.792560  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792565  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0315 23:45:50.792569  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792573  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792583  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0315 23:45:50.792590  108344 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0315 23:45:50.792596  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792599  108344 command_runner.go:130] >       "size": "65291810",
	I0315 23:45:50.792606  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792613  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792617  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792621  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792630  108344 command_runner.go:130] >     },
	I0315 23:45:50.792634  108344 command_runner.go:130] >     {
	I0315 23:45:50.792640  108344 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0315 23:45:50.792645  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792651  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0315 23:45:50.792657  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792661  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792668  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0315 23:45:50.792677  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0315 23:45:50.792683  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792687  108344 command_runner.go:130] >       "size": "1363676",
	I0315 23:45:50.792693  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792697  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792715  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792721  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792724  108344 command_runner.go:130] >     },
	I0315 23:45:50.792730  108344 command_runner.go:130] >     {
	I0315 23:45:50.792736  108344 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0315 23:45:50.792742  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792749  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0315 23:45:50.792755  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792759  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792768  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0315 23:45:50.792782  108344 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0315 23:45:50.792788  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792793  108344 command_runner.go:130] >       "size": "31470524",
	I0315 23:45:50.792799  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792803  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792809  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792813  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792818  108344 command_runner.go:130] >     },
	I0315 23:45:50.792822  108344 command_runner.go:130] >     {
	I0315 23:45:50.792829  108344 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0315 23:45:50.792834  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792839  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0315 23:45:50.792845  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792849  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792856  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0315 23:45:50.792866  108344 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0315 23:45:50.792871  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792876  108344 command_runner.go:130] >       "size": "53621675",
	I0315 23:45:50.792881  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.792886  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792892  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.792896  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.792902  108344 command_runner.go:130] >     },
	I0315 23:45:50.792908  108344 command_runner.go:130] >     {
	I0315 23:45:50.792916  108344 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0315 23:45:50.792920  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.792925  108344 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0315 23:45:50.792930  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792935  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.792944  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0315 23:45:50.792954  108344 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0315 23:45:50.792961  108344 command_runner.go:130] >       ],
	I0315 23:45:50.792965  108344 command_runner.go:130] >       "size": "295456551",
	I0315 23:45:50.792971  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.792975  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.792984  108344 command_runner.go:130] >       },
	I0315 23:45:50.792990  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.792994  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793000  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793004  108344 command_runner.go:130] >     },
	I0315 23:45:50.793007  108344 command_runner.go:130] >     {
	I0315 23:45:50.793013  108344 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0315 23:45:50.793019  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793024  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0315 23:45:50.793030  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793034  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793043  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0315 23:45:50.793053  108344 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0315 23:45:50.793056  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793063  108344 command_runner.go:130] >       "size": "127226832",
	I0315 23:45:50.793067  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793074  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.793077  108344 command_runner.go:130] >       },
	I0315 23:45:50.793084  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793088  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793094  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793103  108344 command_runner.go:130] >     },
	I0315 23:45:50.793109  108344 command_runner.go:130] >     {
	I0315 23:45:50.793115  108344 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0315 23:45:50.793121  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793127  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0315 23:45:50.793132  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793137  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793155  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0315 23:45:50.793165  108344 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0315 23:45:50.793171  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793176  108344 command_runner.go:130] >       "size": "123261750",
	I0315 23:45:50.793182  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793187  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.793193  108344 command_runner.go:130] >       },
	I0315 23:45:50.793196  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793202  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793206  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793212  108344 command_runner.go:130] >     },
	I0315 23:45:50.793216  108344 command_runner.go:130] >     {
	I0315 23:45:50.793224  108344 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0315 23:45:50.793231  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793235  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0315 23:45:50.793242  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793246  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793255  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0315 23:45:50.793264  108344 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0315 23:45:50.793272  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793277  108344 command_runner.go:130] >       "size": "74749335",
	I0315 23:45:50.793281  108344 command_runner.go:130] >       "uid": null,
	I0315 23:45:50.793287  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793291  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793297  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793301  108344 command_runner.go:130] >     },
	I0315 23:45:50.793306  108344 command_runner.go:130] >     {
	I0315 23:45:50.793312  108344 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0315 23:45:50.793318  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793323  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0315 23:45:50.793328  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793333  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793342  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0315 23:45:50.793351  108344 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0315 23:45:50.793357  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793361  108344 command_runner.go:130] >       "size": "61551410",
	I0315 23:45:50.793366  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793370  108344 command_runner.go:130] >         "value": "0"
	I0315 23:45:50.793376  108344 command_runner.go:130] >       },
	I0315 23:45:50.793380  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793386  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793391  108344 command_runner.go:130] >       "pinned": false
	I0315 23:45:50.793396  108344 command_runner.go:130] >     },
	I0315 23:45:50.793400  108344 command_runner.go:130] >     {
	I0315 23:45:50.793409  108344 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0315 23:45:50.793413  108344 command_runner.go:130] >       "repoTags": [
	I0315 23:45:50.793418  108344 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0315 23:45:50.793421  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793428  108344 command_runner.go:130] >       "repoDigests": [
	I0315 23:45:50.793434  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0315 23:45:50.793442  108344 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0315 23:45:50.793448  108344 command_runner.go:130] >       ],
	I0315 23:45:50.793452  108344 command_runner.go:130] >       "size": "750414",
	I0315 23:45:50.793458  108344 command_runner.go:130] >       "uid": {
	I0315 23:45:50.793462  108344 command_runner.go:130] >         "value": "65535"
	I0315 23:45:50.793467  108344 command_runner.go:130] >       },
	I0315 23:45:50.793471  108344 command_runner.go:130] >       "username": "",
	I0315 23:45:50.793478  108344 command_runner.go:130] >       "spec": null,
	I0315 23:45:50.793482  108344 command_runner.go:130] >       "pinned": true
	I0315 23:45:50.793488  108344 command_runner.go:130] >     }
	I0315 23:45:50.793491  108344 command_runner.go:130] >   ]
	I0315 23:45:50.793497  108344 command_runner.go:130] > }
	I0315 23:45:50.793978  108344 crio.go:496] all images are preloaded for cri-o runtime.
	I0315 23:45:50.793998  108344 cache_images.go:84] Images are preloaded, skipping loading
	I0315 23:45:50.794006  108344 kubeadm.go:928] updating node { 192.168.39.5 8443 v1.28.4 crio true true} ...
	I0315 23:45:50.794110  108344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-658614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
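Note: the kubelet drop-in above varies per node only in the --hostname-override and --node-ip flags. A minimal sketch, assuming a simple text/template rendering (not minikube's actual template), of producing that unit from those values:

package main

import (
	"os"
	"text/template"
)

// unit reproduces the drop-in shown above, with the per-node fields templated.
const unit = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unit))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.28.4",
		"Hostname":          "multinode-658614",
		"NodeIP":            "192.168.39.5",
	})
}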
	I0315 23:45:50.794175  108344 ssh_runner.go:195] Run: crio config
	I0315 23:45:50.836222  108344 command_runner.go:130] ! time="2024-03-15 23:45:50.808508453Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0315 23:45:50.844252  108344 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0315 23:45:50.850007  108344 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0315 23:45:50.850027  108344 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0315 23:45:50.850033  108344 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0315 23:45:50.850037  108344 command_runner.go:130] > #
	I0315 23:45:50.850043  108344 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0315 23:45:50.850049  108344 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0315 23:45:50.850056  108344 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0315 23:45:50.850064  108344 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0315 23:45:50.850073  108344 command_runner.go:130] > # reload'.
	I0315 23:45:50.850079  108344 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0315 23:45:50.850085  108344 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0315 23:45:50.850091  108344 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0315 23:45:50.850105  108344 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0315 23:45:50.850111  108344 command_runner.go:130] > [crio]
	I0315 23:45:50.850120  108344 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0315 23:45:50.850130  108344 command_runner.go:130] > # containers images, in this directory.
	I0315 23:45:50.850137  108344 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0315 23:45:50.850151  108344 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0315 23:45:50.850160  108344 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0315 23:45:50.850175  108344 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0315 23:45:50.850185  108344 command_runner.go:130] > # imagestore = ""
	I0315 23:45:50.850195  108344 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0315 23:45:50.850205  108344 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0315 23:45:50.850210  108344 command_runner.go:130] > storage_driver = "overlay"
	I0315 23:45:50.850219  108344 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0315 23:45:50.850225  108344 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0315 23:45:50.850237  108344 command_runner.go:130] > storage_option = [
	I0315 23:45:50.850244  108344 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0315 23:45:50.850247  108344 command_runner.go:130] > ]
	I0315 23:45:50.850258  108344 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0315 23:45:50.850266  108344 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0315 23:45:50.850273  108344 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0315 23:45:50.850278  108344 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0315 23:45:50.850286  108344 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0315 23:45:50.850291  108344 command_runner.go:130] > # always happen on a node reboot
	I0315 23:45:50.850298  108344 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0315 23:45:50.850321  108344 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0315 23:45:50.850330  108344 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0315 23:45:50.850335  108344 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0315 23:45:50.850340  108344 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0315 23:45:50.850347  108344 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0315 23:45:50.850357  108344 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0315 23:45:50.850363  108344 command_runner.go:130] > # internal_wipe = true
	I0315 23:45:50.850371  108344 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0315 23:45:50.850378  108344 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0315 23:45:50.850383  108344 command_runner.go:130] > # internal_repair = false
	I0315 23:45:50.850390  108344 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0315 23:45:50.850396  108344 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0315 23:45:50.850404  108344 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0315 23:45:50.850413  108344 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0315 23:45:50.850422  108344 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0315 23:45:50.850428  108344 command_runner.go:130] > [crio.api]
	I0315 23:45:50.850434  108344 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0315 23:45:50.850440  108344 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0315 23:45:50.850445  108344 command_runner.go:130] > # IP address on which the stream server will listen.
	I0315 23:45:50.850452  108344 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0315 23:45:50.850459  108344 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0315 23:45:50.850467  108344 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0315 23:45:50.850471  108344 command_runner.go:130] > # stream_port = "0"
	I0315 23:45:50.850478  108344 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0315 23:45:50.850485  108344 command_runner.go:130] > # stream_enable_tls = false
	I0315 23:45:50.850491  108344 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0315 23:45:50.850497  108344 command_runner.go:130] > # stream_idle_timeout = ""
	I0315 23:45:50.850504  108344 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0315 23:45:50.850514  108344 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0315 23:45:50.850520  108344 command_runner.go:130] > # minutes.
	I0315 23:45:50.850524  108344 command_runner.go:130] > # stream_tls_cert = ""
	I0315 23:45:50.850531  108344 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0315 23:45:50.850540  108344 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0315 23:45:50.850546  108344 command_runner.go:130] > # stream_tls_key = ""
	I0315 23:45:50.850552  108344 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0315 23:45:50.850560  108344 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0315 23:45:50.850574  108344 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0315 23:45:50.850580  108344 command_runner.go:130] > # stream_tls_ca = ""
	I0315 23:45:50.850588  108344 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 23:45:50.850594  108344 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0315 23:45:50.850602  108344 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0315 23:45:50.850609  108344 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0315 23:45:50.850615  108344 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0315 23:45:50.850622  108344 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0315 23:45:50.850629  108344 command_runner.go:130] > [crio.runtime]
	I0315 23:45:50.850634  108344 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0315 23:45:50.850642  108344 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0315 23:45:50.850648  108344 command_runner.go:130] > # "nofile=1024:2048"
	I0315 23:45:50.850654  108344 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0315 23:45:50.850660  108344 command_runner.go:130] > # default_ulimits = [
	I0315 23:45:50.850664  108344 command_runner.go:130] > # ]
	I0315 23:45:50.850671  108344 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0315 23:45:50.850677  108344 command_runner.go:130] > # no_pivot = false
	I0315 23:45:50.850683  108344 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0315 23:45:50.850691  108344 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0315 23:45:50.850698  108344 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0315 23:45:50.850704  108344 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0315 23:45:50.850711  108344 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0315 23:45:50.850717  108344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 23:45:50.850723  108344 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0315 23:45:50.850728  108344 command_runner.go:130] > # Cgroup setting for conmon
	I0315 23:45:50.850736  108344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0315 23:45:50.850741  108344 command_runner.go:130] > conmon_cgroup = "pod"
	I0315 23:45:50.850747  108344 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0315 23:45:50.850754  108344 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0315 23:45:50.850763  108344 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0315 23:45:50.850768  108344 command_runner.go:130] > conmon_env = [
	I0315 23:45:50.850775  108344 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 23:45:50.850781  108344 command_runner.go:130] > ]
	I0315 23:45:50.850786  108344 command_runner.go:130] > # Additional environment variables to set for all the
	I0315 23:45:50.850793  108344 command_runner.go:130] > # containers. These are overridden if set in the
	I0315 23:45:50.850798  108344 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0315 23:45:50.850804  108344 command_runner.go:130] > # default_env = [
	I0315 23:45:50.850808  108344 command_runner.go:130] > # ]
	I0315 23:45:50.850814  108344 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0315 23:45:50.850824  108344 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0315 23:45:50.850829  108344 command_runner.go:130] > # selinux = false
	I0315 23:45:50.850835  108344 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0315 23:45:50.850844  108344 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0315 23:45:50.850851  108344 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0315 23:45:50.850856  108344 command_runner.go:130] > # seccomp_profile = ""
	I0315 23:45:50.850861  108344 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0315 23:45:50.850869  108344 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0315 23:45:50.850877  108344 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0315 23:45:50.850884  108344 command_runner.go:130] > # which might increase security.
	I0315 23:45:50.850889  108344 command_runner.go:130] > # This option is currently deprecated,
	I0315 23:45:50.850896  108344 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0315 23:45:50.850901  108344 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0315 23:45:50.850909  108344 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0315 23:45:50.850917  108344 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0315 23:45:50.850923  108344 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0315 23:45:50.850931  108344 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0315 23:45:50.850938  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.850943  108344 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0315 23:45:50.850950  108344 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0315 23:45:50.850954  108344 command_runner.go:130] > # the cgroup blockio controller.
	I0315 23:45:50.850960  108344 command_runner.go:130] > # blockio_config_file = ""
	I0315 23:45:50.850966  108344 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0315 23:45:50.850973  108344 command_runner.go:130] > # blockio parameters.
	I0315 23:45:50.850977  108344 command_runner.go:130] > # blockio_reload = false
	I0315 23:45:50.850986  108344 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0315 23:45:50.850992  108344 command_runner.go:130] > # irqbalance daemon.
	I0315 23:45:50.850997  108344 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0315 23:45:50.851009  108344 command_runner.go:130] > # irqbalance_config_restore_file allows setting a cpu mask that CRI-O should
	I0315 23:45:50.851015  108344 command_runner.go:130] > # restore as the irqbalance config at startup. Set to an empty string to disable this flow entirely.
	I0315 23:45:50.851024  108344 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0315 23:45:50.851032  108344 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0315 23:45:50.851039  108344 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0315 23:45:50.851046  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.851050  108344 command_runner.go:130] > # rdt_config_file = ""
	I0315 23:45:50.851057  108344 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0315 23:45:50.851061  108344 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0315 23:45:50.851080  108344 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0315 23:45:50.851086  108344 command_runner.go:130] > # separate_pull_cgroup = ""
	I0315 23:45:50.851092  108344 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0315 23:45:50.851105  108344 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0315 23:45:50.851111  108344 command_runner.go:130] > # will be added.
	I0315 23:45:50.851115  108344 command_runner.go:130] > # default_capabilities = [
	I0315 23:45:50.851120  108344 command_runner.go:130] > # 	"CHOWN",
	I0315 23:45:50.851124  108344 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0315 23:45:50.851130  108344 command_runner.go:130] > # 	"FSETID",
	I0315 23:45:50.851134  108344 command_runner.go:130] > # 	"FOWNER",
	I0315 23:45:50.851140  108344 command_runner.go:130] > # 	"SETGID",
	I0315 23:45:50.851143  108344 command_runner.go:130] > # 	"SETUID",
	I0315 23:45:50.851149  108344 command_runner.go:130] > # 	"SETPCAP",
	I0315 23:45:50.851153  108344 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0315 23:45:50.851160  108344 command_runner.go:130] > # 	"KILL",
	I0315 23:45:50.851163  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851170  108344 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0315 23:45:50.851179  108344 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0315 23:45:50.851185  108344 command_runner.go:130] > # add_inheritable_capabilities = false
	I0315 23:45:50.851192  108344 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0315 23:45:50.851199  108344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 23:45:50.851205  108344 command_runner.go:130] > # default_sysctls = [
	I0315 23:45:50.851208  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851213  108344 command_runner.go:130] > # List of devices on the host that a
	I0315 23:45:50.851220  108344 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0315 23:45:50.851227  108344 command_runner.go:130] > # allowed_devices = [
	I0315 23:45:50.851231  108344 command_runner.go:130] > # 	"/dev/fuse",
	I0315 23:45:50.851238  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851243  108344 command_runner.go:130] > # List of additional devices, specified as
	I0315 23:45:50.851252  108344 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0315 23:45:50.851259  108344 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0315 23:45:50.851265  108344 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0315 23:45:50.851273  108344 command_runner.go:130] > # additional_devices = [
	I0315 23:45:50.851279  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851284  108344 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0315 23:45:50.851290  108344 command_runner.go:130] > # cdi_spec_dirs = [
	I0315 23:45:50.851294  108344 command_runner.go:130] > # 	"/etc/cdi",
	I0315 23:45:50.851297  108344 command_runner.go:130] > # 	"/var/run/cdi",
	I0315 23:45:50.851302  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851308  108344 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0315 23:45:50.851327  108344 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0315 23:45:50.851337  108344 command_runner.go:130] > # Defaults to false.
	I0315 23:45:50.851345  108344 command_runner.go:130] > # device_ownership_from_security_context = false
	I0315 23:45:50.851355  108344 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0315 23:45:50.851365  108344 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0315 23:45:50.851371  108344 command_runner.go:130] > # hooks_dir = [
	I0315 23:45:50.851376  108344 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0315 23:45:50.851382  108344 command_runner.go:130] > # ]
	I0315 23:45:50.851388  108344 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0315 23:45:50.851396  108344 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0315 23:45:50.851403  108344 command_runner.go:130] > # its default mounts from the following two files:
	I0315 23:45:50.851406  108344 command_runner.go:130] > #
	I0315 23:45:50.851414  108344 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0315 23:45:50.851422  108344 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0315 23:45:50.851429  108344 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0315 23:45:50.851435  108344 command_runner.go:130] > #
	I0315 23:45:50.851440  108344 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0315 23:45:50.851449  108344 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0315 23:45:50.851457  108344 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0315 23:45:50.851463  108344 command_runner.go:130] > #      only add mounts it finds in this file.
	I0315 23:45:50.851467  108344 command_runner.go:130] > #
	I0315 23:45:50.851471  108344 command_runner.go:130] > # default_mounts_file = ""
	I0315 23:45:50.851478  108344 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0315 23:45:50.851488  108344 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0315 23:45:50.851494  108344 command_runner.go:130] > pids_limit = 1024
	I0315 23:45:50.851501  108344 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0315 23:45:50.851509  108344 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0315 23:45:50.851518  108344 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0315 23:45:50.851528  108344 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0315 23:45:50.851534  108344 command_runner.go:130] > # log_size_max = -1
	I0315 23:45:50.851541  108344 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0315 23:45:50.851550  108344 command_runner.go:130] > # log_to_journald = false
	I0315 23:45:50.851559  108344 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0315 23:45:50.851566  108344 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0315 23:45:50.851571  108344 command_runner.go:130] > # Path to directory for container attach sockets.
	I0315 23:45:50.851578  108344 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0315 23:45:50.851583  108344 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0315 23:45:50.851589  108344 command_runner.go:130] > # bind_mount_prefix = ""
	I0315 23:45:50.851595  108344 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0315 23:45:50.851600  108344 command_runner.go:130] > # read_only = false
	I0315 23:45:50.851607  108344 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0315 23:45:50.851615  108344 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0315 23:45:50.851621  108344 command_runner.go:130] > # live configuration reload.
	I0315 23:45:50.851625  108344 command_runner.go:130] > # log_level = "info"
	I0315 23:45:50.851633  108344 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0315 23:45:50.851638  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.851644  108344 command_runner.go:130] > # log_filter = ""
	I0315 23:45:50.851651  108344 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0315 23:45:50.851660  108344 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0315 23:45:50.851666  108344 command_runner.go:130] > # separated by comma.
	I0315 23:45:50.851673  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851679  108344 command_runner.go:130] > # uid_mappings = ""
	I0315 23:45:50.851685  108344 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0315 23:45:50.851694  108344 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0315 23:45:50.851701  108344 command_runner.go:130] > # separated by comma.
	I0315 23:45:50.851708  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851714  108344 command_runner.go:130] > # gid_mappings = ""
	I0315 23:45:50.851720  108344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0315 23:45:50.851729  108344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 23:45:50.851736  108344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 23:45:50.851745  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851751  108344 command_runner.go:130] > # minimum_mappable_uid = -1
	I0315 23:45:50.851757  108344 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0315 23:45:50.851765  108344 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0315 23:45:50.851773  108344 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0315 23:45:50.851781  108344 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0315 23:45:50.851787  108344 command_runner.go:130] > # minimum_mappable_gid = -1
	I0315 23:45:50.851793  108344 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0315 23:45:50.851803  108344 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0315 23:45:50.851811  108344 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0315 23:45:50.851818  108344 command_runner.go:130] > # ctr_stop_timeout = 30
	I0315 23:45:50.851823  108344 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0315 23:45:50.851831  108344 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0315 23:45:50.851838  108344 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0315 23:45:50.851843  108344 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0315 23:45:50.851849  108344 command_runner.go:130] > drop_infra_ctr = false
	I0315 23:45:50.851855  108344 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0315 23:45:50.851863  108344 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0315 23:45:50.851871  108344 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0315 23:45:50.851877  108344 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0315 23:45:50.851888  108344 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0315 23:45:50.851897  108344 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0315 23:45:50.851905  108344 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0315 23:45:50.851912  108344 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0315 23:45:50.851916  108344 command_runner.go:130] > # shared_cpuset = ""
	I0315 23:45:50.851924  108344 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0315 23:45:50.851932  108344 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0315 23:45:50.851938  108344 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0315 23:45:50.851945  108344 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0315 23:45:50.851951  108344 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0315 23:45:50.851957  108344 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0315 23:45:50.851965  108344 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0315 23:45:50.851972  108344 command_runner.go:130] > # enable_criu_support = false
	I0315 23:45:50.851978  108344 command_runner.go:130] > # Enable/disable the generation of the container,
	I0315 23:45:50.851986  108344 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0315 23:45:50.851991  108344 command_runner.go:130] > # enable_pod_events = false
	I0315 23:45:50.852000  108344 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0315 23:45:50.852013  108344 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0315 23:45:50.852017  108344 command_runner.go:130] > # default_runtime = "runc"
	I0315 23:45:50.852024  108344 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0315 23:45:50.852032  108344 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0315 23:45:50.852042  108344 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0315 23:45:50.852051  108344 command_runner.go:130] > # creation as a file is not desired either.
	I0315 23:45:50.852061  108344 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0315 23:45:50.852069  108344 command_runner.go:130] > # the hostname is being managed dynamically.
	I0315 23:45:50.852073  108344 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0315 23:45:50.852079  108344 command_runner.go:130] > # ]
	I0315 23:45:50.852085  108344 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0315 23:45:50.852093  108344 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0315 23:45:50.852105  108344 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0315 23:45:50.852112  108344 command_runner.go:130] > # Each entry in the table should follow the format:
	I0315 23:45:50.852117  108344 command_runner.go:130] > #
	I0315 23:45:50.852122  108344 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0315 23:45:50.852129  108344 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0315 23:45:50.852133  108344 command_runner.go:130] > # runtime_type = "oci"
	I0315 23:45:50.852157  108344 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0315 23:45:50.852165  108344 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0315 23:45:50.852169  108344 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0315 23:45:50.852176  108344 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0315 23:45:50.852180  108344 command_runner.go:130] > # monitor_env = []
	I0315 23:45:50.852186  108344 command_runner.go:130] > # privileged_without_host_devices = false
	I0315 23:45:50.852190  108344 command_runner.go:130] > # allowed_annotations = []
	I0315 23:45:50.852198  108344 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0315 23:45:50.852201  108344 command_runner.go:130] > # Where:
	I0315 23:45:50.852208  108344 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0315 23:45:50.852215  108344 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0315 23:45:50.852223  108344 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0315 23:45:50.852231  108344 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0315 23:45:50.852237  108344 command_runner.go:130] > #   in $PATH.
	I0315 23:45:50.852243  108344 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0315 23:45:50.852251  108344 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0315 23:45:50.852260  108344 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0315 23:45:50.852266  108344 command_runner.go:130] > #   state.
	I0315 23:45:50.852273  108344 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0315 23:45:50.852280  108344 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0315 23:45:50.852289  108344 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0315 23:45:50.852294  108344 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0315 23:45:50.852302  108344 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0315 23:45:50.852311  108344 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0315 23:45:50.852318  108344 command_runner.go:130] > #   The currently recognized values are:
	I0315 23:45:50.852326  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0315 23:45:50.852334  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0315 23:45:50.852342  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0315 23:45:50.852349  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0315 23:45:50.852357  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0315 23:45:50.852365  108344 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0315 23:45:50.852374  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0315 23:45:50.852382  108344 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0315 23:45:50.852388  108344 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0315 23:45:50.852396  108344 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0315 23:45:50.852401  108344 command_runner.go:130] > #   deprecated option "conmon".
	I0315 23:45:50.852407  108344 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0315 23:45:50.852414  108344 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0315 23:45:50.852420  108344 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0315 23:45:50.852428  108344 command_runner.go:130] > #   should be moved to the container's cgroup
	I0315 23:45:50.852435  108344 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0315 23:45:50.852441  108344 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0315 23:45:50.852448  108344 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0315 23:45:50.852455  108344 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0315 23:45:50.852458  108344 command_runner.go:130] > #
	I0315 23:45:50.852463  108344 command_runner.go:130] > # Using the seccomp notifier feature:
	I0315 23:45:50.852469  108344 command_runner.go:130] > #
	I0315 23:45:50.852474  108344 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0315 23:45:50.852483  108344 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0315 23:45:50.852486  108344 command_runner.go:130] > #
	I0315 23:45:50.852492  108344 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0315 23:45:50.852501  108344 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0315 23:45:50.852507  108344 command_runner.go:130] > #
	I0315 23:45:50.852513  108344 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0315 23:45:50.852519  108344 command_runner.go:130] > # feature.
	I0315 23:45:50.852522  108344 command_runner.go:130] > #
	I0315 23:45:50.852530  108344 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0315 23:45:50.852538  108344 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0315 23:45:50.852544  108344 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0315 23:45:50.852552  108344 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0315 23:45:50.852562  108344 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0315 23:45:50.852568  108344 command_runner.go:130] > #
	I0315 23:45:50.852573  108344 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0315 23:45:50.852582  108344 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0315 23:45:50.852587  108344 command_runner.go:130] > #
	I0315 23:45:50.852592  108344 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0315 23:45:50.852600  108344 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0315 23:45:50.852603  108344 command_runner.go:130] > #
	I0315 23:45:50.852612  108344 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0315 23:45:50.852620  108344 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0315 23:45:50.852626  108344 command_runner.go:130] > # limitation.
	I0315 23:45:50.852630  108344 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0315 23:45:50.852637  108344 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0315 23:45:50.852640  108344 command_runner.go:130] > runtime_type = "oci"
	I0315 23:45:50.852646  108344 command_runner.go:130] > runtime_root = "/run/runc"
	I0315 23:45:50.852650  108344 command_runner.go:130] > runtime_config_path = ""
	I0315 23:45:50.852657  108344 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0315 23:45:50.852661  108344 command_runner.go:130] > monitor_cgroup = "pod"
	I0315 23:45:50.852664  108344 command_runner.go:130] > monitor_exec_cgroup = ""
	I0315 23:45:50.852669  108344 command_runner.go:130] > monitor_env = [
	I0315 23:45:50.852675  108344 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0315 23:45:50.852680  108344 command_runner.go:130] > ]
	I0315 23:45:50.852684  108344 command_runner.go:130] > privileged_without_host_devices = false
	I0315 23:45:50.852692  108344 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0315 23:45:50.852700  108344 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0315 23:45:50.852706  108344 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0315 23:45:50.852715  108344 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0315 23:45:50.852725  108344 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0315 23:45:50.852733  108344 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0315 23:45:50.852743  108344 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0315 23:45:50.852752  108344 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0315 23:45:50.852757  108344 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0315 23:45:50.852767  108344 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0315 23:45:50.852773  108344 command_runner.go:130] > # Example:
	I0315 23:45:50.852777  108344 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0315 23:45:50.852783  108344 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0315 23:45:50.852788  108344 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0315 23:45:50.852793  108344 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0315 23:45:50.852798  108344 command_runner.go:130] > # cpuset = 0
	I0315 23:45:50.852802  108344 command_runner.go:130] > # cpushares = "0-1"
	I0315 23:45:50.852805  108344 command_runner.go:130] > # Where:
	I0315 23:45:50.852809  108344 command_runner.go:130] > # The workload name is workload-type.
	I0315 23:45:50.852815  108344 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0315 23:45:50.852820  108344 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0315 23:45:50.852825  108344 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0315 23:45:50.852832  108344 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0315 23:45:50.852838  108344 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0315 23:45:50.852843  108344 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0315 23:45:50.852848  108344 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0315 23:45:50.852852  108344 command_runner.go:130] > # Default value is set to true
	I0315 23:45:50.852856  108344 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0315 23:45:50.852861  108344 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0315 23:45:50.852866  108344 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0315 23:45:50.852870  108344 command_runner.go:130] > # Default value is set to 'false'
	I0315 23:45:50.852874  108344 command_runner.go:130] > # disable_hostport_mapping = false
	I0315 23:45:50.852880  108344 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0315 23:45:50.852883  108344 command_runner.go:130] > #
	I0315 23:45:50.852887  108344 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0315 23:45:50.852893  108344 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0315 23:45:50.852899  108344 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0315 23:45:50.852904  108344 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0315 23:45:50.852909  108344 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0315 23:45:50.852912  108344 command_runner.go:130] > [crio.image]
	I0315 23:45:50.852918  108344 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0315 23:45:50.852922  108344 command_runner.go:130] > # default_transport = "docker://"
	I0315 23:45:50.852928  108344 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0315 23:45:50.852934  108344 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0315 23:45:50.852938  108344 command_runner.go:130] > # global_auth_file = ""
	I0315 23:45:50.852942  108344 command_runner.go:130] > # The image used to instantiate infra containers.
	I0315 23:45:50.852947  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.852951  108344 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0315 23:45:50.852957  108344 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0315 23:45:50.852963  108344 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0315 23:45:50.852967  108344 command_runner.go:130] > # This option supports live configuration reload.
	I0315 23:45:50.852973  108344 command_runner.go:130] > # pause_image_auth_file = ""
	I0315 23:45:50.852981  108344 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0315 23:45:50.852987  108344 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0315 23:45:50.852995  108344 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0315 23:45:50.853001  108344 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0315 23:45:50.853007  108344 command_runner.go:130] > # pause_command = "/pause"
	I0315 23:45:50.853013  108344 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0315 23:45:50.853021  108344 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0315 23:45:50.853028  108344 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0315 23:45:50.853034  108344 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0315 23:45:50.853043  108344 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0315 23:45:50.853051  108344 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0315 23:45:50.853057  108344 command_runner.go:130] > # pinned_images = [
	I0315 23:45:50.853060  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853068  108344 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0315 23:45:50.853077  108344 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0315 23:45:50.853083  108344 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0315 23:45:50.853091  108344 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0315 23:45:50.853101  108344 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0315 23:45:50.853107  108344 command_runner.go:130] > # signature_policy = ""
	I0315 23:45:50.853113  108344 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0315 23:45:50.853121  108344 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0315 23:45:50.853129  108344 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0315 23:45:50.853136  108344 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0315 23:45:50.853144  108344 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0315 23:45:50.853150  108344 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0315 23:45:50.853158  108344 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0315 23:45:50.853165  108344 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0315 23:45:50.853171  108344 command_runner.go:130] > # changing them here.
	I0315 23:45:50.853175  108344 command_runner.go:130] > # insecure_registries = [
	I0315 23:45:50.853181  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853187  108344 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0315 23:45:50.853194  108344 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0315 23:45:50.853198  108344 command_runner.go:130] > # image_volumes = "mkdir"
	I0315 23:45:50.853206  108344 command_runner.go:130] > # Temporary directory to use for storing big files
	I0315 23:45:50.853212  108344 command_runner.go:130] > # big_files_temporary_dir = ""
	I0315 23:45:50.853218  108344 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0315 23:45:50.853226  108344 command_runner.go:130] > # CNI plugins.
	I0315 23:45:50.853232  108344 command_runner.go:130] > [crio.network]
	I0315 23:45:50.853238  108344 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0315 23:45:50.853246  108344 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0315 23:45:50.853250  108344 command_runner.go:130] > # cni_default_network = ""
	I0315 23:45:50.853258  108344 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0315 23:45:50.853265  108344 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0315 23:45:50.853270  108344 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0315 23:45:50.853276  108344 command_runner.go:130] > # plugin_dirs = [
	I0315 23:45:50.853280  108344 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0315 23:45:50.853285  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853291  108344 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0315 23:45:50.853297  108344 command_runner.go:130] > [crio.metrics]
	I0315 23:45:50.853302  108344 command_runner.go:130] > # Globally enable or disable metrics support.
	I0315 23:45:50.853308  108344 command_runner.go:130] > enable_metrics = true
	I0315 23:45:50.853313  108344 command_runner.go:130] > # Specify enabled metrics collectors.
	I0315 23:45:50.853321  108344 command_runner.go:130] > # Per default all metrics are enabled.
	I0315 23:45:50.853329  108344 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0315 23:45:50.853335  108344 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0315 23:45:50.853344  108344 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0315 23:45:50.853348  108344 command_runner.go:130] > # metrics_collectors = [
	I0315 23:45:50.853352  108344 command_runner.go:130] > # 	"operations",
	I0315 23:45:50.853359  108344 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0315 23:45:50.853364  108344 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0315 23:45:50.853371  108344 command_runner.go:130] > # 	"operations_errors",
	I0315 23:45:50.853375  108344 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0315 23:45:50.853381  108344 command_runner.go:130] > # 	"image_pulls_by_name",
	I0315 23:45:50.853385  108344 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0315 23:45:50.853392  108344 command_runner.go:130] > # 	"image_pulls_failures",
	I0315 23:45:50.853396  108344 command_runner.go:130] > # 	"image_pulls_successes",
	I0315 23:45:50.853402  108344 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0315 23:45:50.853407  108344 command_runner.go:130] > # 	"image_layer_reuse",
	I0315 23:45:50.853413  108344 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0315 23:45:50.853417  108344 command_runner.go:130] > # 	"containers_oom_total",
	I0315 23:45:50.853421  108344 command_runner.go:130] > # 	"containers_oom",
	I0315 23:45:50.853427  108344 command_runner.go:130] > # 	"processes_defunct",
	I0315 23:45:50.853431  108344 command_runner.go:130] > # 	"operations_total",
	I0315 23:45:50.853437  108344 command_runner.go:130] > # 	"operations_latency_seconds",
	I0315 23:45:50.853442  108344 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0315 23:45:50.853448  108344 command_runner.go:130] > # 	"operations_errors_total",
	I0315 23:45:50.853452  108344 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0315 23:45:50.853458  108344 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0315 23:45:50.853462  108344 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0315 23:45:50.853468  108344 command_runner.go:130] > # 	"image_pulls_success_total",
	I0315 23:45:50.853472  108344 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0315 23:45:50.853478  108344 command_runner.go:130] > # 	"containers_oom_count_total",
	I0315 23:45:50.853486  108344 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0315 23:45:50.853492  108344 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0315 23:45:50.853496  108344 command_runner.go:130] > # ]
	I0315 23:45:50.853500  108344 command_runner.go:130] > # The port on which the metrics server will listen.
	I0315 23:45:50.853506  108344 command_runner.go:130] > # metrics_port = 9090
	I0315 23:45:50.853512  108344 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0315 23:45:50.853517  108344 command_runner.go:130] > # metrics_socket = ""
	I0315 23:45:50.853522  108344 command_runner.go:130] > # The certificate for the secure metrics server.
	I0315 23:45:50.853530  108344 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0315 23:45:50.853540  108344 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0315 23:45:50.853546  108344 command_runner.go:130] > # certificate on any modification event.
	I0315 23:45:50.853550  108344 command_runner.go:130] > # metrics_cert = ""
	I0315 23:45:50.853561  108344 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0315 23:45:50.853568  108344 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0315 23:45:50.853573  108344 command_runner.go:130] > # metrics_key = ""
	I0315 23:45:50.853580  108344 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0315 23:45:50.853586  108344 command_runner.go:130] > [crio.tracing]
	I0315 23:45:50.853592  108344 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0315 23:45:50.853598  108344 command_runner.go:130] > # enable_tracing = false
	I0315 23:45:50.853603  108344 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0315 23:45:50.853609  108344 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0315 23:45:50.853616  108344 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0315 23:45:50.853623  108344 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0315 23:45:50.853641  108344 command_runner.go:130] > # CRI-O NRI configuration.
	I0315 23:45:50.853645  108344 command_runner.go:130] > [crio.nri]
	I0315 23:45:50.853650  108344 command_runner.go:130] > # Globally enable or disable NRI.
	I0315 23:45:50.853654  108344 command_runner.go:130] > # enable_nri = false
	I0315 23:45:50.853660  108344 command_runner.go:130] > # NRI socket to listen on.
	I0315 23:45:50.853665  108344 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0315 23:45:50.853672  108344 command_runner.go:130] > # NRI plugin directory to use.
	I0315 23:45:50.853676  108344 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0315 23:45:50.853683  108344 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0315 23:45:50.853689  108344 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0315 23:45:50.853696  108344 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0315 23:45:50.853703  108344 command_runner.go:130] > # nri_disable_connections = false
	I0315 23:45:50.853708  108344 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0315 23:45:50.853716  108344 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0315 23:45:50.853721  108344 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0315 23:45:50.853727  108344 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0315 23:45:50.853733  108344 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0315 23:45:50.853739  108344 command_runner.go:130] > [crio.stats]
	I0315 23:45:50.853744  108344 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0315 23:45:50.853754  108344 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0315 23:45:50.853760  108344 command_runner.go:130] > # stats_collection_period = 0
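	For reference, stripping the commented-out defaults from the dump above leaves only the settings this minikube build actually overrides. A minimal crio.conf fragment collecting those key/value pairs would look roughly like the sketch below; grouping them into a single file is an assumption for illustration, the values themselves are taken verbatim from the log:

	[crio.api]
	grpc_max_send_msg_size = 16777216
	grpc_max_recv_msg_size = 16777216

	[crio.runtime]
	conmon = "/usr/libexec/crio/conmon"
	conmon_cgroup = "pod"
	conmon_env = ["PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"]
	seccomp_use_default_when_empty = false
	cgroup_manager = "cgroupfs"
	pids_limit = 1024
	drop_infra_ctr = false
	pinns_path = "/usr/bin/pinns"

	[crio.runtime.runtimes.runc]
	runtime_path = "/usr/bin/runc"
	runtime_type = "oci"
	runtime_root = "/run/runc"
	monitor_path = "/usr/libexec/crio/conmon"
	monitor_cgroup = "pod"

	[crio.metrics]
	enable_metrics = true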
	I0315 23:45:50.853908  108344 cni.go:84] Creating CNI manager for ""
	I0315 23:45:50.853923  108344 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0315 23:45:50.853935  108344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0315 23:45:50.853961  108344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.5 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-658614 NodeName:multinode-658614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0315 23:45:50.854130  108344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-658614"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0315 23:45:50.854196  108344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0315 23:45:50.865054  108344 command_runner.go:130] > kubeadm
	I0315 23:45:50.865081  108344 command_runner.go:130] > kubectl
	I0315 23:45:50.865085  108344 command_runner.go:130] > kubelet
	I0315 23:45:50.865111  108344 binaries.go:44] Found k8s binaries, skipping transfer
	I0315 23:45:50.865160  108344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0315 23:45:50.875682  108344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0315 23:45:50.894627  108344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0315 23:45:50.912021  108344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
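	The generated kubeadm config shown above is staged to /var/tmp/minikube/kubeadm.yaml.new on the node. As an illustrative check (not something the test itself runs), the staged file could be validated with the kubeadm binary that was listed under /var/lib/minikube/binaries/v1.28.4 just above; "kubeadm config validate" exists in the v1.28 CLI, but this exact invocation is an assumption:

	$ ./out/minikube-linux-amd64 -p multinode-658614 ssh -- sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new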
	I0315 23:45:50.928642  108344 ssh_runner.go:195] Run: grep 192.168.39.5	control-plane.minikube.internal$ /etc/hosts
	I0315 23:45:50.932319  108344 command_runner.go:130] > 192.168.39.5	control-plane.minikube.internal
	I0315 23:45:50.932443  108344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0315 23:45:51.090396  108344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0315 23:45:51.105649  108344 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614 for IP: 192.168.39.5
	I0315 23:45:51.105676  108344 certs.go:194] generating shared ca certs ...
	I0315 23:45:51.105694  108344 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:45:51.105852  108344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0315 23:45:51.105893  108344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0315 23:45:51.105903  108344 certs.go:256] generating profile certs ...
	I0315 23:45:51.105982  108344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/client.key
	I0315 23:45:51.106049  108344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.key.cf0e8e33
	I0315 23:45:51.106123  108344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.key
	I0315 23:45:51.106139  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0315 23:45:51.106157  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0315 23:45:51.106168  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0315 23:45:51.106179  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0315 23:45:51.106188  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0315 23:45:51.106200  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0315 23:45:51.106213  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0315 23:45:51.106223  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0315 23:45:51.106271  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0315 23:45:51.106305  108344 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0315 23:45:51.106315  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0315 23:45:51.106336  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0315 23:45:51.106358  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0315 23:45:51.106381  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0315 23:45:51.106419  108344 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0315 23:45:51.106442  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.106455  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.106467  108344 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem -> /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.107010  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0315 23:45:51.134098  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0315 23:45:51.159098  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0315 23:45:51.184170  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0315 23:45:51.210428  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0315 23:45:51.235143  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0315 23:45:51.260488  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0315 23:45:51.285937  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/multinode-658614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0315 23:45:51.310353  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0315 23:45:51.334720  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0315 23:45:51.359551  108344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0315 23:45:51.384053  108344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0315 23:45:51.401067  108344 ssh_runner.go:195] Run: openssl version
	I0315 23:45:51.407868  108344 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0315 23:45:51.408005  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0315 23:45:51.422915  108344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.427778  108344 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.427825  108344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.427872  108344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0315 23:45:51.433651  108344 command_runner.go:130] > 3ec20f2e
	I0315 23:45:51.433720  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0315 23:45:51.443714  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0315 23:45:51.454991  108344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.459573  108344 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.459604  108344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.459662  108344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0315 23:45:51.465298  108344 command_runner.go:130] > b5213941
	I0315 23:45:51.465586  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0315 23:45:51.475612  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0315 23:45:51.486980  108344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.492113  108344 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.492134  108344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.492182  108344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0315 23:45:51.497783  108344 command_runner.go:130] > 51391683
	I0315 23:45:51.498047  108344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
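The three test/ln steps above all follow the same OpenSSL CA-directory convention: compute the certificate's subject hash and link the PEM under /etc/ssl/certs/<hash>.0 so that hash-based lookups can find it. A minimal sketch of that pattern, with CERT as a placeholder path rather than a file from this run:

    # Hash-and-link pattern used above; CERT is a placeholder, not from this run.
    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
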
	I0315 23:45:51.508080  108344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:45:51.512802  108344 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0315 23:45:51.512832  108344 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0315 23:45:51.512841  108344 command_runner.go:130] > Device: 253,1	Inode: 3150397     Links: 1
	I0315 23:45:51.512851  108344 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0315 23:45:51.512867  108344 command_runner.go:130] > Access: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512877  108344 command_runner.go:130] > Modify: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512886  108344 command_runner.go:130] > Change: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512895  108344 command_runner.go:130] >  Birth: 2024-03-15 23:39:30.043615845 +0000
	I0315 23:45:51.512952  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0315 23:45:51.518594  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.518839  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0315 23:45:51.524497  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.524709  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0315 23:45:51.530492  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.530552  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0315 23:45:51.536519  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.536580  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0315 23:45:51.542454  108344 command_runner.go:130] > Certificate will not expire
	I0315 23:45:51.542498  108344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0315 23:45:51.548338  108344 command_runner.go:130] > Certificate will not expire
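Each probe above relies on openssl's -checkend flag, which prints "Certificate will not expire" and exits zero only if the certificate remains valid for at least the given number of seconds (86400s = 24h). For illustration only, the same checks could be run as one loop over the control-plane certificates named in this log:

    # Illustrative re-run of the 24h expiry checks performed above.
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
             etcd/server etcd/healthcheck-client etcd/peer; do
      echo -n "${c}: "
      openssl x509 -noout -in "/var/lib/minikube/certs/${c}.crt" -checkend 86400
    done
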
	I0315 23:45:51.548390  108344 kubeadm.go:391] StartCluster: {Name:multinode-658614 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-658614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.5 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.156 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fa
lse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:45:51.548525  108344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0315 23:45:51.548603  108344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0315 23:45:51.585941  108344 command_runner.go:130] > c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a
	I0315 23:45:51.585965  108344 command_runner.go:130] > 06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395
	I0315 23:45:51.585971  108344 command_runner.go:130] > 909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b
	I0315 23:45:51.585980  108344 command_runner.go:130] > c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c
	I0315 23:45:51.586118  108344 command_runner.go:130] > c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1
	I0315 23:45:51.586144  108344 command_runner.go:130] > 4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e
	I0315 23:45:51.586200  108344 command_runner.go:130] > 632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870
	I0315 23:45:51.586280  108344 command_runner.go:130] > 83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02
	I0315 23:45:51.588116  108344 cri.go:89] found id: "c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a"
	I0315 23:45:51.588134  108344 cri.go:89] found id: "06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395"
	I0315 23:45:51.588139  108344 cri.go:89] found id: "909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b"
	I0315 23:45:51.588142  108344 cri.go:89] found id: "c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c"
	I0315 23:45:51.588145  108344 cri.go:89] found id: "c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1"
	I0315 23:45:51.588148  108344 cri.go:89] found id: "4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e"
	I0315 23:45:51.588151  108344 cri.go:89] found id: "632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870"
	I0315 23:45:51.588154  108344 cri.go:89] found id: "83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02"
	I0315 23:45:51.588156  108344 cri.go:89] found id: ""
	I0315 23:45:51.588198  108344 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.404491427Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546581404466765,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6005bf92-b7e2-4c9c-839b-ec6b283310a5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.405179010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fe52a18c-aff6-4084-913b-2ebc4b324cdf name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.405238023Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fe52a18c-aff6-4084-913b-2ebc4b324cdf name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.405799831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fe52a18c-aff6-4084-913b-2ebc4b324cdf name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.459208097Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09d972e4-ec55-42a5-ab52-711bdea91fcb name=/runtime.v1.RuntimeService/Version
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.459308047Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09d972e4-ec55-42a5-ab52-711bdea91fcb name=/runtime.v1.RuntimeService/Version
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.460750137Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=26832aec-67d9-4f4d-86b7-cbfe5c4fa8fe name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.461287171Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546581461263111,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=26832aec-67d9-4f4d-86b7-cbfe5c4fa8fe name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.461742397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b866de68-61df-4b97-b992-1b7c635a0991 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.461800537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b866de68-61df-4b97-b992-1b7c635a0991 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.462463094Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b866de68-61df-4b97-b992-1b7c635a0991 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.506152493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8888c322-cda0-494a-a94b-c08588e5b973 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.506233650Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8888c322-cda0-494a-a94b-c08588e5b973 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.507670268Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4c4076e3-f53b-410d-9865-d11666921872 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.508094649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546581508072477,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4c4076e3-f53b-410d-9865-d11666921872 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.509041527Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81c3ff1e-b056-49b7-9dd1-007ad2480b67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.509150454Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81c3ff1e-b056-49b7-9dd1-007ad2480b67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.509481110Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81c3ff1e-b056-49b7-9dd1-007ad2480b67 name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.550501517Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=44a4bc86-14f4-4f06-849c-d0281215f168 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.550598793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=44a4bc86-14f4-4f06-849c-d0281215f168 name=/runtime.v1.RuntimeService/Version
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.551924801Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=91572233-19ed-4653-8f76-550e207414f5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.552412775Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710546581552390052,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=91572233-19ed-4653-8f76-550e207414f5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.552994505Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=973b3d4f-ada5-40ef-99fe-1ac4308079df name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.553081723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=973b3d4f-ada5-40ef-99fe-1ac4308079df name=/runtime.v1.RuntimeService/ListContainers
	Mar 15 23:49:41 multinode-658614 crio[2852]: time="2024-03-15 23:49:41.553607863Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:dc42d8471a949b0b01365f4996762be75a5350e44b29a1a2a16ece7260d33ffa,PodSandboxId:d09919244e9c2e12206b11c9981ac997db8b76c69ed726b882ce5c37dd67bd08,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710546391311029437,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291,PodSandboxId:aaa54b04e6c90d91048b4e0b9465d6f657be08f8a3b0037c5e6cc68a4f8138e2,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710546357724727413,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5,PodSandboxId:f80eb860efd408a9c97595c2082035aa0733198611c75069eac266378909d034,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710546357689319891,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d2b0d61df2eb4419ad0357e8fc444a4df06bb8036f72620432d702ecafdbfde9,PodSandboxId:4d751dda4b627f53cd6efd733b5c9319704e659fb42760f2665eb69809488f32,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710546357610575845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},A
nnotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06,PodSandboxId:507f5f4ddce61b41624982d1e4980d14178a293aab844c63b96033e841818ba9,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710546357577558221,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.k
ubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2,PodSandboxId:acfb76c3886364622a9908027179e7746ac786b4aa46307ff51d915b1102664c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710546353845339537,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 650852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507,PodSandboxId:3b953de6379c2743e9fc9e4199e1afaf03060d8225093a96991fe78850025aa5,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710546353860676715,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.ku
bernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f,PodSandboxId:010abb516e30f7b69691dc0f6be397277b20115a1bab6afb9c3bda7a8fcbe708,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710546353835585160,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: c0078d26,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03,PodSandboxId:98f9ff3db2c59de9cd3ec1d8abe512e9a8a72662ebb0595fb92a144952ac8821,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710546353741303114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e814843722b36d210d9fe42ed45988d730c62a96e2832a729e97497f25c67c9,PodSandboxId:8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710546044007960199,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-92n6k,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f1af3d85-99fc-4912-b3ad-82ba68669470,},Annotations:map[string]string{io.kubernetes.container.hash: 930c0f18,io.kubernetes.container.r
estartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a,PodSandboxId:cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710545998817588899,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-svv8j,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 09dfcf49-6c59-4977-9640-f1a4d6821864,},Annotations:map[string]string{io.kubernetes.container.hash: 3e59aa1e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"conta
inerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a5e9d3986b9ca70392f19c7e0a740343b61784a00b6212e303b50ad3220395,PodSandboxId:257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710545997837880112,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.po
d.namespace: kube-system,io.kubernetes.pod.uid: 1e8e16bd-d511-4417-ac7a-5308ad831bf5,},Annotations:map[string]string{io.kubernetes.container.hash: 1bc5e0cc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b,PodSandboxId:02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710545996512442875,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-fbp4p,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: 2522127e-36a4-483f-8ede-5600caf9f295,},Annotations:map[string]string{io.kubernetes.container.hash: bd334a56,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c,PodSandboxId:af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710545993161177639,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-htvcb,io.kubernetes.pod.namespace: kube-system,io.kuberne
tes.pod.uid: 3b896986-5939-4631-8c76-b5d5159a4353,},Annotations:map[string]string{io.kubernetes.container.hash: 7445efd3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870,PodSandboxId:ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710545973412415491,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-658614,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: 561be46583aa09d103c7726ea003d0c9,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e,PodSandboxId:d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710545973425267430,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: ad72ccbcff6c402d10ba31c6081afcd8,},Annotations:map[string]string{io.kubernetes.container.hash: c0078d26,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1,PodSandboxId:e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710545973428742028,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
50852b8f83dd8b1bddbd7c262bccccf,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02,PodSandboxId:63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710545973337646842,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-658614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f5d895b743ed129ff72231a6f50da5fc,},Annotations
:map[string]string{io.kubernetes.container.hash: ab4a6ef3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=973b3d4f-ada5-40ef-99fe-1ac4308079df name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc42d8471a949       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   d09919244e9c2       busybox-5b5d89c9d6-92n6k
	cc0741cddf298       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   aaa54b04e6c90       kindnet-fbp4p
	0f0e444a07850       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   f80eb860efd40       coredns-5dd5756b68-svv8j
	d2b0d61df2eb4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   4d751dda4b627       storage-provisioner
	90fceaff63bd1       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   507f5f4ddce61       kube-proxy-htvcb
	0a5c1020bc422       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   3b953de6379c2       kube-controller-manager-multinode-658614
	a6bada5ba1ce5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   acfb76c388636       kube-scheduler-multinode-658614
	5db0e1e79f1b8       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   010abb516e30f       kube-apiserver-multinode-658614
	e0260f4b557b1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   98f9ff3db2c59       etcd-multinode-658614
	2e814843722b3       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   8be786cb8d36a       busybox-5b5d89c9d6-92n6k
	c4de486f1575d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   cd8e591d04d83       coredns-5dd5756b68-svv8j
	06a5e9d3986b9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   257fb4484d03d       storage-provisioner
	909ae95e34667       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   02c02adee17ad       kindnet-fbp4p
	c47cb7f221d82       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   af098ff74177e       kube-proxy-htvcb
	c906413d2f1ff       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   e1f0cefc865de       kube-scheduler-multinode-658614
	4241c39a188f5       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   d222d247f3f4b       kube-apiserver-multinode-658614
	632296766ea82       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   ce796ddbaee08       kube-controller-manager-multinode-658614
	83287ddb0e44f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   63c7df9fa65ef       etcd-multinode-658614
	
	
	==> coredns [0f0e444a078509804df45136a898ec38eff55bc664b1052d8580ecc7b3919bf5] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:51548 - 15259 "HINFO IN 6031034294552822681.752798960729125411. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.12685263s
	
	
	==> coredns [c4de486f1575de24cd17512e1f7de1a9f0f177a05069d3da3402df590f34bb3a] <==
	[INFO] 10.244.1.2:53078 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00163102s
	[INFO] 10.244.1.2:60668 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000082566s
	[INFO] 10.244.1.2:56629 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000111406s
	[INFO] 10.244.1.2:60947 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001004102s
	[INFO] 10.244.1.2:50606 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117165s
	[INFO] 10.244.1.2:60733 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000110786s
	[INFO] 10.244.1.2:48408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068872s
	[INFO] 10.244.0.3:54069 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000112242s
	[INFO] 10.244.0.3:43363 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000108029s
	[INFO] 10.244.0.3:55139 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000086624s
	[INFO] 10.244.0.3:34718 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0001326s
	[INFO] 10.244.1.2:39766 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000156811s
	[INFO] 10.244.1.2:53188 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000110977s
	[INFO] 10.244.1.2:40890 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117994s
	[INFO] 10.244.1.2:56357 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000106009s
	[INFO] 10.244.0.3:51288 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116825s
	[INFO] 10.244.0.3:56174 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000178603s
	[INFO] 10.244.0.3:53162 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000135355s
	[INFO] 10.244.0.3:48700 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000079412s
	[INFO] 10.244.1.2:45984 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000154255s
	[INFO] 10.244.1.2:54555 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000201559s
	[INFO] 10.244.1.2:41306 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000093647s
	[INFO] 10.244.1.2:34839 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000139006s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-658614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-658614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=multinode-658614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_15T23_39_40_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:39:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-658614
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:49:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:34 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:34 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:34 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 15 Mar 2024 23:45:56 +0000   Fri, 15 Mar 2024 23:39:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.5
	  Hostname:    multinode-658614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 368493f35fdf43e6a1515de1070ad8f9
	  System UUID:                368493f3-5fdf-43e6-a151-5de1070ad8f9
	  Boot ID:                    5e2adf93-f7fb-413e-8e50-0831904af602
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-92n6k                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m
	  kube-system                 coredns-5dd5756b68-svv8j                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m49s
	  kube-system                 etcd-multinode-658614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-fbp4p                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m49s
	  kube-system                 kube-apiserver-multinode-658614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-658614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-htvcb                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 kube-scheduler-multinode-658614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m48s                  kube-proxy       
	  Normal  Starting                 3m43s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-658614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-658614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-658614 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m49s                  node-controller  Node multinode-658614 event: Registered Node multinode-658614 in Controller
	  Normal  NodeReady                9m44s                  kubelet          Node multinode-658614 status is now: NodeReady
	  Normal  Starting                 3m48s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m48s (x8 over 3m48s)  kubelet          Node multinode-658614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m48s (x8 over 3m48s)  kubelet          Node multinode-658614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m48s (x7 over 3m48s)  kubelet          Node multinode-658614 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m48s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m32s                  node-controller  Node multinode-658614 event: Registered Node multinode-658614 in Controller
	
	
	Name:               multinode-658614-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-658614-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=multinode-658614
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_15T23_46_40_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 15 Mar 2024 23:46:39 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-658614-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 15 Mar 2024 23:47:20 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:48:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:48:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:48:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 15 Mar 2024 23:47:10 +0000   Fri, 15 Mar 2024 23:48:04 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    multinode-658614-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 462735656ec24d819e5b2ae09c8dcc97
	  System UUID:                46273565-6ec2-4d81-9e5b-2ae09c8dcc97
	  Boot ID:                    ca6582ea-21cb-41f7-9eb4-1b75bd144789
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-ljljd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-f9785               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m10s
	  kube-system                 kube-proxy-ph8fc            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m6s                   kube-proxy       
	  Normal  Starting                 2m59s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m10s (x5 over 9m11s)  kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m10s (x5 over 9m11s)  kubelet          Node multinode-658614-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m10s (x5 over 9m11s)  kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m2s                   kubelet          Node multinode-658614-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m2s (x5 over 3m3s)    kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x5 over 3m3s)    kubelet          Node multinode-658614-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x5 over 3m3s)    kubelet          Node multinode-658614-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m57s                  node-controller  Node multinode-658614-m02 event: Registered Node multinode-658614-m02 in Controller
	  Normal  NodeReady                2m55s                  kubelet          Node multinode-658614-m02 status is now: NodeReady
	  Normal  NodeNotReady             97s                    node-controller  Node multinode-658614-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.177690] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.135727] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.257355] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +4.748079] systemd-fstab-generator[753]: Ignoring "noauto" option for root device
	[  +0.061271] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.219362] systemd-fstab-generator[935]: Ignoring "noauto" option for root device
	[  +1.037450] kauditd_printk_skb: 57 callbacks suppressed
	[  +6.229095] systemd-fstab-generator[1272]: Ignoring "noauto" option for root device
	[  +0.074710] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.527421] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.162074] systemd-fstab-generator[1522]: Ignoring "noauto" option for root device
	[  +5.495006] kauditd_printk_skb: 70 callbacks suppressed
	[Mar15 23:40] kauditd_printk_skb: 4 callbacks suppressed
	[Mar15 23:45] systemd-fstab-generator[2775]: Ignoring "noauto" option for root device
	[  +0.144628] systemd-fstab-generator[2787]: Ignoring "noauto" option for root device
	[  +0.167553] systemd-fstab-generator[2801]: Ignoring "noauto" option for root device
	[  +0.149284] systemd-fstab-generator[2813]: Ignoring "noauto" option for root device
	[  +0.246181] systemd-fstab-generator[2837]: Ignoring "noauto" option for root device
	[  +6.090323] systemd-fstab-generator[2936]: Ignoring "noauto" option for root device
	[  +0.083738] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.713553] systemd-fstab-generator[3062]: Ignoring "noauto" option for root device
	[  +4.701931] kauditd_printk_skb: 74 callbacks suppressed
	[Mar15 23:46] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.834913] systemd-fstab-generator[3879]: Ignoring "noauto" option for root device
	[ +17.929648] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [83287ddb0e44ffb41e848ba7a264c8d418f77b5044289cdc5072240dc7acee02] <==
	{"level":"info","ts":"2024-03-15T23:39:34.663773Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.5:2379"}
	{"level":"info","ts":"2024-03-15T23:39:34.666535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:39:34.666763Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:39:34.669206Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:39:34.667186Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T23:39:34.670176Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T23:39:34.670214Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T23:39:34.677621Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-03-15T23:41:16.516378Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"150.693981ms","expected-duration":"100ms","prefix":"","request":"header:<ID:154123237053274849 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/certificatesigningrequests/csr-jldtv\" mod_revision:571 > success:<request_put:<key:\"/registry/certificatesigningrequests/csr-jldtv\" value_size:2292 >> failure:<request_range:<key:\"/registry/certificatesigningrequests/csr-jldtv\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-15T23:41:16.517343Z","caller":"traceutil/trace.go:171","msg":"trace[61032887] transaction","detail":"{read_only:false; response_revision:572; number_of_response:1; }","duration":"205.790238ms","start":"2024-03-15T23:41:16.311496Z","end":"2024-03-15T23:41:16.517287Z","steps":["trace[61032887] 'process raft request'  (duration: 53.255979ms)","trace[61032887] 'compare'  (duration: 150.569607ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-15T23:41:20.817277Z","caller":"traceutil/trace.go:171","msg":"trace[857755669] linearizableReadLoop","detail":"{readStateIndex:631; appliedIndex:630; }","duration":"116.767561ms","start":"2024-03-15T23:41:20.700487Z","end":"2024-03-15T23:41:20.817255Z","steps":["trace[857755669] 'read index received'  (duration: 116.542026ms)","trace[857755669] 'applied index is now lower than readState.Index'  (duration: 225.126µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-15T23:41:20.817538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"117.026022ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-15T23:41:20.817645Z","caller":"traceutil/trace.go:171","msg":"trace[2024657246] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:600; }","duration":"117.169169ms","start":"2024-03-15T23:41:20.700462Z","end":"2024-03-15T23:41:20.817631Z","steps":["trace[2024657246] 'agreement among raft nodes before linearized reading'  (duration: 117.005368ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:41:20.81756Z","caller":"traceutil/trace.go:171","msg":"trace[1480458444] transaction","detail":"{read_only:false; response_revision:600; number_of_response:1; }","duration":"220.291088ms","start":"2024-03-15T23:41:20.597246Z","end":"2024-03-15T23:41:20.817537Z","steps":["trace[1480458444] 'process raft request'  (duration: 219.829193ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:41:21.236646Z","caller":"traceutil/trace.go:171","msg":"trace[1257671243] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"142.999904ms","start":"2024-03-15T23:41:21.093631Z","end":"2024-03-15T23:41:21.236631Z","steps":["trace[1257671243] 'process raft request'  (duration: 142.870186ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-15T23:44:12.766553Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-15T23:44:12.766748Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-658614","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	{"level":"warn","ts":"2024-03-15T23:44:12.767011Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:44:12.776259Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:44:12.849042Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-15T23:44:12.849162Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-15T23:44:12.849325Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"c5263387c79c0223","current-leader-member-id":"c5263387c79c0223"}
	{"level":"info","ts":"2024-03-15T23:44:12.852469Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:44:12.852643Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:44:12.852681Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-658614","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"]}
	
	
	==> etcd [e0260f4b557b1c3224effe3ab6b42acc211665b40892c3f3e4d8bec03b548b03] <==
	{"level":"info","ts":"2024-03-15T23:45:54.102387Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T23:45:54.1025Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-15T23:45:54.103419Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 switched to configuration voters=(14206098732849300003)"}
	{"level":"info","ts":"2024-03-15T23:45:54.103646Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","added-peer-id":"c5263387c79c0223","added-peer-peer-urls":["https://192.168.39.5:2380"]}
	{"level":"info","ts":"2024-03-15T23:45:54.104016Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"436188ec3031a10e","local-member-id":"c5263387c79c0223","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:45:54.106219Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-15T23:45:54.128991Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-15T23:45:54.129322Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c5263387c79c0223","initial-advertise-peer-urls":["https://192.168.39.5:2380"],"listen-peer-urls":["https://192.168.39.5:2380"],"advertise-client-urls":["https://192.168.39.5:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.5:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-15T23:45:54.129379Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-15T23:45:54.129465Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:45:54.129472Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.5:2380"}
	{"level":"info","ts":"2024-03-15T23:45:55.362441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-15T23:45:55.362482Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-15T23:45:55.362496Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgPreVoteResp from c5263387c79c0223 at term 2"}
	{"level":"info","ts":"2024-03-15T23:45:55.362507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became candidate at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.362513Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 received MsgVoteResp from c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.362522Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c5263387c79c0223 became leader at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.362547Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c5263387c79c0223 elected leader c5263387c79c0223 at term 3"}
	{"level":"info","ts":"2024-03-15T23:45:55.36528Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c5263387c79c0223","local-member-attributes":"{Name:multinode-658614 ClientURLs:[https://192.168.39.5:2379]}","request-path":"/0/members/c5263387c79c0223/attributes","cluster-id":"436188ec3031a10e","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-15T23:45:55.365293Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T23:45:55.365531Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-15T23:45:55.366899Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-15T23:45:55.367202Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-15T23:45:55.367235Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-15T23:45:55.366901Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.5:2379"}
	
	
	==> kernel <==
	 23:49:42 up 10 min,  0 users,  load average: 0.15, 0.15, 0.09
	Linux multinode-658614 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [909ae95e346677645ae3cc5a9810697c041a5729f956d923d20f703852d1f67b] <==
	I0315 23:43:27.392321       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:43:37.407644       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:43:37.407706       1 main.go:227] handling current node
	I0315 23:43:37.407721       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:43:37.407730       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:43:37.407886       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:43:37.407924       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:43:47.424045       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:43:47.424261       1 main.go:227] handling current node
	I0315 23:43:47.424369       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:43:47.424397       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:43:47.424740       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:43:47.424831       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:43:57.431806       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:43:57.431965       1 main.go:227] handling current node
	I0315 23:43:57.432003       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:43:57.432085       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:43:57.432404       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:43:57.432436       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	I0315 23:44:07.446336       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:44:07.446495       1 main.go:227] handling current node
	I0315 23:44:07.446532       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:44:07.446555       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:44:07.446723       1 main.go:223] Handling node with IPs: map[192.168.39.156:{}]
	I0315 23:44:07.446743       1 main.go:250] Node multinode-658614-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [cc0741cddf298d678f3b22b4afdb01658e0777970756f7734f6cd22f144ed291] <==
	I0315 23:48:38.903383       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:48:48.916842       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:48:48.916886       1 main.go:227] handling current node
	I0315 23:48:48.916910       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:48:48.916915       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:48:58.922223       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:48:58.922359       1 main.go:227] handling current node
	I0315 23:48:58.922383       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:48:58.922403       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:49:08.936635       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:49:08.936693       1 main.go:227] handling current node
	I0315 23:49:08.936709       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:49:08.936719       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:49:18.943642       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:49:18.943751       1 main.go:227] handling current node
	I0315 23:49:18.943774       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:49:18.943792       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:49:28.956748       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:49:28.956878       1 main.go:227] handling current node
	I0315 23:49:28.956902       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:49:28.956920       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	I0315 23:49:38.962733       1 main.go:223] Handling node with IPs: map[192.168.39.5:{}]
	I0315 23:49:38.962880       1 main.go:227] handling current node
	I0315 23:49:38.962984       1 main.go:223] Handling node with IPs: map[192.168.39.215:{}]
	I0315 23:49:38.963042       1 main.go:250] Node multinode-658614-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [4241c39a188f5701edbb5dea70e38e05b2d94253c28f7cc0f7a62e5b52ba9c0e] <==
	W0315 23:44:12.768583       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.768656       1 logging.go:59] [core] [Channel #115 SubChannel #116] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.768903       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.770288       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.785584       1 logging.go:59] [core] [Channel #19 SubChannel #20] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.790092       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.790615       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.791624       1 logging.go:59] [core] [Channel #121 SubChannel #122] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.792608       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.792920       1 logging.go:59] [core] [Channel #79 SubChannel #80] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.793032       1 logging.go:59] [core] [Channel #118 SubChannel #119] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.793153       1 logging.go:59] [core] [Channel #82 SubChannel #83] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.796565       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.796652       1 logging.go:59] [core] [Channel #22 SubChannel #23] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798499       1 logging.go:59] [core] [Channel #109 SubChannel #110] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798692       1 logging.go:59] [core] [Channel #139 SubChannel #140] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798772       1 logging.go:59] [core] [Channel #49 SubChannel #50] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.798854       1 logging.go:59] [core] [Channel #10 SubChannel #11] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.799842       1 logging.go:59] [core] [Channel #67 SubChannel #68] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.799938       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800000       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800053       1 logging.go:59] [core] [Channel #103 SubChannel #104] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800166       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0315 23:44:12.800408       1 logging.go:59] [core] [Channel #172 SubChannel #173] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	E0315 23:44:12.803490       1 controller.go:131] Unable to remove endpoints from kubernetes service: rpc error: code = Unknown desc = malformed header: missing HTTP content-type
	
	
	==> kube-apiserver [5db0e1e79f1b8e4281472177811e9f4a1146055c0d0fcdd9ea9c60182a6ba08f] <==
	I0315 23:45:56.766689       1 establishing_controller.go:76] Starting EstablishingController
	I0315 23:45:56.766801       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0315 23:45:56.766902       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0315 23:45:56.767003       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0315 23:45:56.831285       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0315 23:45:56.843703       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0315 23:45:56.852752       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0315 23:45:56.860907       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0315 23:45:56.861091       1 shared_informer.go:318] Caches are synced for configmaps
	I0315 23:45:56.863211       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0315 23:45:56.863220       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0315 23:45:56.864182       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0315 23:45:56.864826       1 aggregator.go:166] initial CRD sync complete...
	I0315 23:45:56.864878       1 autoregister_controller.go:141] Starting autoregister controller
	I0315 23:45:56.864901       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0315 23:45:56.864924       1 cache.go:39] Caches are synced for autoregister controller
	I0315 23:45:56.885808       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0315 23:45:57.765952       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0315 23:45:59.422152       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0315 23:45:59.553225       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0315 23:45:59.564794       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0315 23:45:59.635909       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0315 23:45:59.646775       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0315 23:46:09.464284       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0315 23:46:09.567477       1 controller.go:624] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0a5c1020bc4220b2ac92e99180c8666cc73285e4aab151eae4386535e5103507] <==
	I0315 23:46:40.259782       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.412µs"
	I0315 23:46:44.368262       1 event.go:307] "Event occurred" object="multinode-658614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-658614-m02 event: Registered Node multinode-658614-m02 in Controller"
	I0315 23:46:46.886780       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:46:46.907783       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="70.992µs"
	I0315 23:46:46.923163       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="58.114µs"
	I0315 23:46:49.381397       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ljljd" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-ljljd"
	I0315 23:46:49.471279       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="11.748032ms"
	I0315 23:46:49.472408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="72.664µs"
	I0315 23:47:05.097402       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:47:07.718775       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m03\" does not exist"
	I0315 23:47:07.721277       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:47:07.758650       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m03" podCIDRs=["10.244.2.0/24"]
	I0315 23:47:14.366321       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:47:20.068822       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:47:24.401994       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-658614-m03 event: Removing Node multinode-658614-m03 from Controller"
	I0315 23:48:04.420928       1 event.go:307] "Event occurred" object="multinode-658614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-658614-m02 status is now: NodeNotReady"
	I0315 23:48:04.434732       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-ljljd" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:48:04.451824       1 event.go:307] "Event occurred" object="kube-system/kindnet-f9785" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:48:04.458455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.574748ms"
	I0315 23:48:04.458555       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.271µs"
	I0315 23:48:04.464663       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ph8fc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:48:09.317573       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-w9gns"
	I0315 23:48:09.345169       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-w9gns"
	I0315 23:48:09.345259       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-lfstz"
	I0315 23:48:09.368036       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-lfstz"
	
	
	==> kube-controller-manager [632296766ea82d40354e44cd48097f0d136293dddb81f069374fea91b49c8870] <==
	I0315 23:41:17.814595       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m03\" does not exist"
	I0315 23:41:17.814719       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:17.831219       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m03" podCIDRs=["10.244.2.0/24"]
	I0315 23:41:17.853340       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-lfstz"
	I0315 23:41:17.861693       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-w9gns"
	I0315 23:41:22.584640       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-658614-m03"
	I0315 23:41:22.584914       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-658614-m03 event: Registered Node multinode-658614-m03 in Controller"
	I0315 23:41:26.132281       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:57.558488       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:57.606499       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-658614-m03 event: Removing Node multinode-658614-m03 from Controller"
	I0315 23:41:59.992521       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:41:59.992631       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-658614-m03\" does not exist"
	I0315 23:42:00.014801       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-658614-m03" podCIDRs=["10.244.3.0/24"]
	I0315 23:42:02.607968       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-658614-m03 event: Registered Node multinode-658614-m03 in Controller"
	I0315 23:42:06.839470       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m03"
	I0315 23:42:47.641439       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-658614-m02"
	I0315 23:42:47.642244       1 event.go:307] "Event occurred" object="multinode-658614-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-658614-m03 status is now: NodeNotReady"
	I0315 23:42:47.650947       1 event.go:307] "Event occurred" object="multinode-658614-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-658614-m02 status is now: NodeNotReady"
	I0315 23:42:47.662296       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-lfstz" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.669477       1 event.go:307] "Event occurred" object="kube-system/kindnet-f9785" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.677607       1 event.go:307] "Event occurred" object="kube-system/kindnet-w9gns" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.685155       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-ph8fc" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.700933       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-r8z86" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0315 23:42:47.715047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="16.817959ms"
	I0315 23:42:47.715823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="133.876µs"
	
	
	==> kube-proxy [90fceaff63bd17735cbd9a0869f57d058e9831e02df71871233dc6d08b904c06] <==
	I0315 23:45:57.999551       1 server_others.go:69] "Using iptables proxy"
	I0315 23:45:58.019787       1 node.go:141] Successfully retrieved node IP: 192.168.39.5
	I0315 23:45:58.084363       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:45:58.084416       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:45:58.089740       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:45:58.089829       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:45:58.089996       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:45:58.090027       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:45:58.095661       1 config.go:188] "Starting service config controller"
	I0315 23:45:58.095726       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:45:58.095771       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:45:58.095796       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:45:58.096492       1 config.go:315] "Starting node config controller"
	I0315 23:45:58.096525       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 23:45:58.196266       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:45:58.196605       1 shared_informer.go:318] Caches are synced for node config
	I0315 23:45:58.196900       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [c47cb7f221d82cbb3d54703a58b8f81d7b56ee02b47f70cc01eeb360ca41944c] <==
	I0315 23:39:53.498752       1 server_others.go:69] "Using iptables proxy"
	I0315 23:39:53.519363       1 node.go:141] Successfully retrieved node IP: 192.168.39.5
	I0315 23:39:53.604885       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0315 23:39:53.604937       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0315 23:39:53.613345       1 server_others.go:152] "Using iptables Proxier"
	I0315 23:39:53.613407       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0315 23:39:53.613562       1 server.go:846] "Version info" version="v1.28.4"
	I0315 23:39:53.613595       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:39:53.614885       1 config.go:188] "Starting service config controller"
	I0315 23:39:53.614931       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0315 23:39:53.614951       1 config.go:97] "Starting endpoint slice config controller"
	I0315 23:39:53.614954       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0315 23:39:53.615390       1 config.go:315] "Starting node config controller"
	I0315 23:39:53.615398       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0315 23:39:53.715164       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0315 23:39:53.715223       1 shared_informer.go:318] Caches are synced for service config
	I0315 23:39:53.715532       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [a6bada5ba1ce53d1fdedd94e79916c3002cca745aeb4e6cfdf55cf5bf9b877c2] <==
	I0315 23:45:54.862918       1 serving.go:348] Generated self-signed cert in-memory
	W0315 23:45:56.793775       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0315 23:45:56.794435       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 23:45:56.794494       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0315 23:45:56.794520       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0315 23:45:56.829569       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0315 23:45:56.829663       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0315 23:45:56.831418       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0315 23:45:56.831599       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0315 23:45:56.837442       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0315 23:45:56.837569       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0315 23:45:56.932412       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [c906413d2f1ff2993d319d23b33fcedaa62273e29d22f88849e1b5a0427e52b1] <==
	E0315 23:39:36.403029       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:39:36.403037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 23:39:36.403089       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0315 23:39:36.403143       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 23:39:36.403149       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0315 23:39:36.403589       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0315 23:39:36.403631       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0315 23:39:37.214063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0315 23:39:37.214156       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0315 23:39:37.234740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0315 23:39:37.234789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0315 23:39:37.275918       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0315 23:39:37.275963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0315 23:39:37.289024       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0315 23:39:37.289201       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0315 23:39:37.533055       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0315 23:39:37.533213       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0315 23:39:37.591528       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0315 23:39:37.591645       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0315 23:39:37.603303       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0315 23:39:37.603353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0315 23:39:37.665797       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0315 23:39:37.665847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0315 23:39:40.191800       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0315 23:44:12.778359       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.108799    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf5d895b743ed129ff72231a6f50da5fc/crio-63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a: Error finding container 63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a: Status 404 returned error can't find the container with id 63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.109168    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podf1af3d85-99fc-4912-b3ad-82ba68669470/crio-8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5: Error finding container 8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5: Status 404 returned error can't find the container with id 8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.109521    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod561be46583aa09d103c7726ea003d0c9/crio-ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22: Error finding container ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22: Status 404 returned error can't find the container with id ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.109885    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod650852b8f83dd8b1bddbd7c262bccccf/crio-e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94: Error finding container e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94: Status 404 returned error can't find the container with id e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.110203    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod1e8e16bd-d511-4417-ac7a-5308ad831bf5/crio-257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f: Error finding container 257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f: Status 404 returned error can't find the container with id 257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.110497    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod3b896986-5939-4631-8c76-b5d5159a4353/crio-af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba: Error finding container af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba: Status 404 returned error can't find the container with id af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba
	Mar 15 23:47:53 multinode-658614 kubelet[3069]: E0315 23:47:53.123466    3069 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:47:53 multinode-658614 kubelet[3069]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:47:53 multinode-658614 kubelet[3069]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:47:53 multinode-658614 kubelet[3069]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:47:53 multinode-658614 kubelet[3069]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.107842    3069 manager.go:1106] Failed to create existing container: /kubepods/pod2522127e-36a4-483f-8ede-5600caf9f295/crio-02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c: Error finding container 02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c: Status 404 returned error can't find the container with id 02c02adee17ade370b46fa56ac3a8125501377ab0b9b35e6b26687cf2500269c
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.108226    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/podad72ccbcff6c402d10ba31c6081afcd8/crio-d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc: Error finding container d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc: Status 404 returned error can't find the container with id d222d247f3f4bfcfc5bceca82e1308be4a1c6af25cf41f4aae48fe5c6e4861fc
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.108577    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod3b896986-5939-4631-8c76-b5d5159a4353/crio-af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba: Error finding container af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba: Status 404 returned error can't find the container with id af098ff74177e270ebbfcd1c9a3a94583e9b68504025bb422a5f8d677a21f4ba
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.108824    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/podf5d895b743ed129ff72231a6f50da5fc/crio-63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a: Error finding container 63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a: Status 404 returned error can't find the container with id 63c7df9fa65ef8355cdf303b0b401f226a8184aa0c2afcad2c0b0a144d8df60a
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.108987    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podf1af3d85-99fc-4912-b3ad-82ba68669470/crio-8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5: Error finding container 8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5: Status 404 returned error can't find the container with id 8be786cb8d36ab3816fd2413cccbe328198a1dccafbed119a0d6ac968d486ce5
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.109203    3069 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod1e8e16bd-d511-4417-ac7a-5308ad831bf5/crio-257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f: Error finding container 257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f: Status 404 returned error can't find the container with id 257fb4484d03dfa8d4c6f355832e67d59706b02b849116b2e4e798119e3e407f
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.109339    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod650852b8f83dd8b1bddbd7c262bccccf/crio-e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94: Error finding container e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94: Status 404 returned error can't find the container with id e1f0cefc865deb2486aeb903cdac4c6d67856727b0a53292d8bd0a533621fe94
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.109467    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod09dfcf49-6c59-4977-9640-f1a4d6821864/crio-cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484: Error finding container cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484: Status 404 returned error can't find the container with id cd8e591d04d8399d1ddd5831fcb91b607012de095bb7152b09801bfd8500b484
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.109545    3069 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod561be46583aa09d103c7726ea003d0c9/crio-ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22: Error finding container ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22: Status 404 returned error can't find the container with id ce796ddbaee0890dddffe88152a1adf102bae6093a58f9107e85e6996e7edf22
	Mar 15 23:48:53 multinode-658614 kubelet[3069]: E0315 23:48:53.122860    3069 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 15 23:48:53 multinode-658614 kubelet[3069]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 15 23:48:53 multinode-658614 kubelet[3069]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 15 23:48:53 multinode-658614 kubelet[3069]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 15 23:48:53 multinode-658614 kubelet[3069]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 23:49:41.111997  109874 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/17991-75602/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-658614 -n multinode-658614
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-658614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.59s)
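Note on the stderr above: the "bufio.Scanner: token too long" failure while reading lastStart.txt is Go's standard error when a single line exceeds the scanner's default 64 KiB token limit. A minimal sketch of reading such a file with a larger buffer (a hypothetical helper for illustration, not minikube's actual logs.go code; the path is a placeholder):

package main

import (
	"bufio"
	"fmt"
	"os"
)

// readLongLines scans a log file whose lines may exceed bufio.Scanner's
// default 64 KiB token limit by handing the scanner a larger buffer.
func readLongLines(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	// Raise the per-line limit to 16 MiB (the default is bufio.MaxScanTokenSize, 64 KiB).
	sc.Buffer(make([]byte, 0, 1024*1024), 16*1024*1024)
	for sc.Scan() {
		fmt.Println(sc.Text())
	}
	return sc.Err() // with the default buffer this is where "token too long" surfaces
}

func main() {
	if err := readLongLines("/path/to/lastStart.txt"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}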

                                                
                                    
x
+
TestPreload (280.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-069578 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0315 23:53:58.905806   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:54:08.402931   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-069578 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m18.187458706s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-069578 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-069578 image pull gcr.io/k8s-minikube/busybox: (1.752057175s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-069578
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-069578: exit status 82 (2m0.486213927s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-069578"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-069578 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-03-15 23:57:46.077258479 +0000 UTC m=+3696.500109742
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-069578 -n test-preload-069578
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-069578 -n test-preload-069578: exit status 3 (18.467251495s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0315 23:58:04.539631  112267 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host
	E0315 23:58:04.539653  112267 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.225:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-069578" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-069578" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-069578
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-069578: (1.111641321s)
--- FAIL: TestPreload (280.01s)
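Note on the failure above: exit status 82 (GUEST_STOP_TIMEOUT) means "minikube stop" gave up while the kvm2 VM remained in state "Running". A minimal triage sketch, assuming the libvirt domain is named after the profile (a hypothetical helper for manual debugging, not part of the test suite): query the domain state with virsh and power it off if it is still running.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// forceStopDomain asks libvirt for the domain's state and, if it is still
// running, hard-stops it with `virsh destroy` (the libvirt equivalent of
// pulling the power). The domain name is assumed to match the minikube profile.
func forceStopDomain(name string) error {
	out, err := exec.Command("virsh", "-c", "qemu:///system", "domstate", name).Output()
	if err != nil {
		return fmt.Errorf("domstate %s: %w", name, err)
	}
	if strings.TrimSpace(string(out)) == "running" {
		return exec.Command("virsh", "-c", "qemu:///system", "destroy", name).Run()
	}
	return nil
}

func main() {
	if err := forceStopDomain("test-preload-069578"); err != nil {
		fmt.Println(err)
	}
}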

                                                
                                    
x
+
TestKubernetesUpgrade (384.17s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m30.901844053s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-209767] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-209767" primary control-plane node in "kubernetes-upgrade-209767" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:59:58.403850  113182 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:59:58.404352  113182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:59:58.404361  113182 out.go:304] Setting ErrFile to fd 2...
	I0315 23:59:58.404366  113182 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:59:58.404552  113182 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:59:58.405072  113182 out.go:298] Setting JSON to false
	I0315 23:59:58.405812  113182 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9748,"bootTime":1710537450,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:59:58.405864  113182 start.go:139] virtualization: kvm guest
	I0315 23:59:58.407584  113182 out.go:177] * [kubernetes-upgrade-209767] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:59:58.410977  113182 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:59:58.409387  113182 notify.go:220] Checking for updates...
	I0315 23:59:58.414071  113182 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:59:58.416375  113182 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:59:58.418704  113182 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:59:58.421196  113182 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:59:58.423712  113182 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:59:58.425250  113182 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:59:58.463600  113182 out.go:177] * Using the kvm2 driver based on user configuration
	I0315 23:59:58.464832  113182 start.go:297] selected driver: kvm2
	I0315 23:59:58.464847  113182 start.go:901] validating driver "kvm2" against <nil>
	I0315 23:59:58.464861  113182 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:59:58.465838  113182 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:59:58.465934  113182 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 23:59:58.482763  113182 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 23:59:58.482809  113182 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 23:59:58.483072  113182 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 23:59:58.483135  113182 cni.go:84] Creating CNI manager for ""
	I0315 23:59:58.483160  113182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 23:59:58.483172  113182 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 23:59:58.483239  113182 start.go:340] cluster config:
	{Name:kubernetes-upgrade-209767 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-209767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:59:58.483405  113182 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 23:59:58.485298  113182 out.go:177] * Starting "kubernetes-upgrade-209767" primary control-plane node in "kubernetes-upgrade-209767" cluster
	I0315 23:59:58.486599  113182 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 23:59:58.486642  113182 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 23:59:58.486651  113182 cache.go:56] Caching tarball of preloaded images
	I0315 23:59:58.486734  113182 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0315 23:59:58.486745  113182 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 23:59:58.487154  113182 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/config.json ...
	I0315 23:59:58.487185  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/config.json: {Name:mk91df6a70d30cddc8abafe1074695a1b7d7192f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 23:59:58.487348  113182 start.go:360] acquireMachinesLock for kubernetes-upgrade-209767: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0315 23:59:58.487398  113182 start.go:364] duration metric: took 31.158µs to acquireMachinesLock for "kubernetes-upgrade-209767"
	I0315 23:59:58.487420  113182 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-209767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-209767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0315 23:59:58.487486  113182 start.go:125] createHost starting for "" (driver="kvm2")
	I0315 23:59:58.489321  113182 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0315 23:59:58.489437  113182 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:59:58.489471  113182 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:59:58.504327  113182 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41369
	I0315 23:59:58.504890  113182 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:59:58.505589  113182 main.go:141] libmachine: Using API Version  1
	I0315 23:59:58.505607  113182 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:59:58.505965  113182 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:59:58.506140  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetMachineName
	I0315 23:59:58.506350  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0315 23:59:58.506530  113182 start.go:159] libmachine.API.Create for "kubernetes-upgrade-209767" (driver="kvm2")
	I0315 23:59:58.506561  113182 client.go:168] LocalClient.Create starting
	I0315 23:59:58.506594  113182 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0315 23:59:58.506627  113182 main.go:141] libmachine: Decoding PEM data...
	I0315 23:59:58.506648  113182 main.go:141] libmachine: Parsing certificate...
	I0315 23:59:58.506707  113182 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0315 23:59:58.506741  113182 main.go:141] libmachine: Decoding PEM data...
	I0315 23:59:58.506763  113182 main.go:141] libmachine: Parsing certificate...
	I0315 23:59:58.506784  113182 main.go:141] libmachine: Running pre-create checks...
	I0315 23:59:58.506795  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .PreCreateCheck
	I0315 23:59:58.507144  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetConfigRaw
	I0315 23:59:58.507587  113182 main.go:141] libmachine: Creating machine...
	I0315 23:59:58.507601  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .Create
	I0315 23:59:58.507730  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Creating KVM machine...
	I0315 23:59:58.509212  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found existing default KVM network
	I0315 23:59:58.510003  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0315 23:59:58.509786  113243 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000014700}
	I0315 23:59:58.510039  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | created network xml: 
	I0315 23:59:58.510064  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | <network>
	I0315 23:59:58.510076  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |   <name>mk-kubernetes-upgrade-209767</name>
	I0315 23:59:58.510086  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |   <dns enable='no'/>
	I0315 23:59:58.510092  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |   
	I0315 23:59:58.510102  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0315 23:59:58.510106  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |     <dhcp>
	I0315 23:59:58.510113  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0315 23:59:58.510118  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |     </dhcp>
	I0315 23:59:58.510144  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |   </ip>
	I0315 23:59:58.510163  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG |   
	I0315 23:59:58.510176  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | </network>
	I0315 23:59:58.510180  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | 
	I0315 23:59:58.514853  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | trying to create private KVM network mk-kubernetes-upgrade-209767 192.168.39.0/24...
	I0315 23:59:58.596484  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | private KVM network mk-kubernetes-upgrade-209767 192.168.39.0/24 created
	I0315 23:59:58.596515  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767 ...
	I0315 23:59:58.596542  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0315 23:59:58.596472  113243 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:59:58.596560  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 23:59:58.596629  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0315 23:59:58.834157  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0315 23:59:58.833981  113243 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa...
	I0315 23:59:58.920547  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0315 23:59:58.920375  113243 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/kubernetes-upgrade-209767.rawdisk...
	I0315 23:59:58.920573  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Writing magic tar header
	I0315 23:59:58.920596  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Writing SSH key tar header
	I0315 23:59:58.920624  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0315 23:59:58.920522  113243 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767 ...
	I0315 23:59:58.920639  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767
	I0315 23:59:58.920663  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0315 23:59:58.920711  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767 (perms=drwx------)
	I0315 23:59:58.920737  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:59:58.920754  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0315 23:59:58.920803  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0315 23:59:58.920828  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0315 23:59:58.920845  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home/jenkins
	I0315 23:59:58.920854  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Checking permissions on dir: /home
	I0315 23:59:58.920868  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0315 23:59:58.920881  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Skipping /home - not owner
	I0315 23:59:58.920910  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0315 23:59:58.920964  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0315 23:59:58.920978  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0315 23:59:58.920996  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Creating domain...
	I0315 23:59:58.921962  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) define libvirt domain using xml: 
	I0315 23:59:58.921997  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) <domain type='kvm'>
	I0315 23:59:58.922031  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <name>kubernetes-upgrade-209767</name>
	I0315 23:59:58.922049  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <memory unit='MiB'>2200</memory>
	I0315 23:59:58.922092  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <vcpu>2</vcpu>
	I0315 23:59:58.922111  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <features>
	I0315 23:59:58.922117  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <acpi/>
	I0315 23:59:58.922122  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <apic/>
	I0315 23:59:58.922128  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <pae/>
	I0315 23:59:58.922133  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     
	I0315 23:59:58.922153  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   </features>
	I0315 23:59:58.922166  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <cpu mode='host-passthrough'>
	I0315 23:59:58.922175  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   
	I0315 23:59:58.922182  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   </cpu>
	I0315 23:59:58.922190  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <os>
	I0315 23:59:58.922195  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <type>hvm</type>
	I0315 23:59:58.922216  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <boot dev='cdrom'/>
	I0315 23:59:58.922238  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <boot dev='hd'/>
	I0315 23:59:58.922252  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <bootmenu enable='no'/>
	I0315 23:59:58.922264  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   </os>
	I0315 23:59:58.922276  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   <devices>
	I0315 23:59:58.922289  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <disk type='file' device='cdrom'>
	I0315 23:59:58.922303  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/boot2docker.iso'/>
	I0315 23:59:58.922330  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <target dev='hdc' bus='scsi'/>
	I0315 23:59:58.922340  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <readonly/>
	I0315 23:59:58.922352  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </disk>
	I0315 23:59:58.922362  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <disk type='file' device='disk'>
	I0315 23:59:58.922372  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0315 23:59:58.922390  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/kubernetes-upgrade-209767.rawdisk'/>
	I0315 23:59:58.922408  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <target dev='hda' bus='virtio'/>
	I0315 23:59:58.922421  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </disk>
	I0315 23:59:58.922435  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <interface type='network'>
	I0315 23:59:58.922444  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <source network='mk-kubernetes-upgrade-209767'/>
	I0315 23:59:58.922451  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <model type='virtio'/>
	I0315 23:59:58.922456  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </interface>
	I0315 23:59:58.922463  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <interface type='network'>
	I0315 23:59:58.922470  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <source network='default'/>
	I0315 23:59:58.922477  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <model type='virtio'/>
	I0315 23:59:58.922482  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </interface>
	I0315 23:59:58.922489  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <serial type='pty'>
	I0315 23:59:58.922495  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <target port='0'/>
	I0315 23:59:58.922502  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </serial>
	I0315 23:59:58.922507  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <console type='pty'>
	I0315 23:59:58.922518  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <target type='serial' port='0'/>
	I0315 23:59:58.922529  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </console>
	I0315 23:59:58.922537  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     <rng model='virtio'>
	I0315 23:59:58.922544  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)       <backend model='random'>/dev/random</backend>
	I0315 23:59:58.922550  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     </rng>
	I0315 23:59:58.922558  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     
	I0315 23:59:58.922568  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)     
	I0315 23:59:58.922584  113182 main.go:141] libmachine: (kubernetes-upgrade-209767)   </devices>
	I0315 23:59:58.922592  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) </domain>
	I0315 23:59:58.922603  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) 
	I0315 23:59:58.929998  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:5d:9b:db in network default
	I0315 23:59:58.930635  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Ensuring networks are active...
	I0315 23:59:58.930661  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0315 23:59:58.931518  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Ensuring network default is active
	I0315 23:59:58.931920  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Ensuring network mk-kubernetes-upgrade-209767 is active
	I0315 23:59:58.932416  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Getting domain xml...
	I0315 23:59:58.933321  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Creating domain...
	I0316 00:00:00.147917  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Waiting to get IP...
	I0316 00:00:00.148610  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:00.149037  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:00.149087  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:00.149021  113243 retry.go:31] will retry after 201.605121ms: waiting for machine to come up
	I0316 00:00:00.352510  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:00.352973  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:00.352997  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:00.352943  113243 retry.go:31] will retry after 349.011504ms: waiting for machine to come up
	I0316 00:00:00.703669  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:00.704374  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:00.704486  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:00.704385  113243 retry.go:31] will retry after 359.796878ms: waiting for machine to come up
	I0316 00:00:01.066095  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:01.066666  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:01.066776  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:01.066689  113243 retry.go:31] will retry after 445.514006ms: waiting for machine to come up
	I0316 00:00:01.514490  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:01.515062  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:01.515092  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:01.515009  113243 retry.go:31] will retry after 634.758213ms: waiting for machine to come up
	I0316 00:00:02.151893  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:02.152280  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:02.152308  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:02.152219  113243 retry.go:31] will retry after 676.2859ms: waiting for machine to come up
	I0316 00:00:02.829729  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:02.830192  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:02.830235  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:02.830136  113243 retry.go:31] will retry after 950.848908ms: waiting for machine to come up
	I0316 00:00:03.782182  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:03.782662  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:03.782687  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:03.782618  113243 retry.go:31] will retry after 1.361048395s: waiting for machine to come up
	I0316 00:00:05.145967  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:05.146469  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:05.146518  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:05.146404  113243 retry.go:31] will retry after 1.638183037s: waiting for machine to come up
	I0316 00:00:06.787054  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:06.787450  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:06.787474  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:06.787412  113243 retry.go:31] will retry after 1.563059297s: waiting for machine to come up
	I0316 00:00:08.352967  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:08.353418  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:08.353452  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:08.353368  113243 retry.go:31] will retry after 2.836821238s: waiting for machine to come up
	I0316 00:00:11.192335  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:11.192761  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:11.192790  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:11.192708  113243 retry.go:31] will retry after 3.12526551s: waiting for machine to come up
	I0316 00:00:14.319314  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:14.319774  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:14.319799  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:14.319728  113243 retry.go:31] will retry after 3.242887093s: waiting for machine to come up
	I0316 00:00:17.567234  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:17.567758  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:00:17.567782  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:00:17.567691  113243 retry.go:31] will retry after 4.991327006s: waiting for machine to come up
	I0316 00:00:22.563734  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.564157  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Found IP for machine: 192.168.39.143
	I0316 00:00:22.564199  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has current primary IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.564213  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Reserving static IP address...
	I0316 00:00:22.564523  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-209767", mac: "52:54:00:59:2d:2b", ip: "192.168.39.143"} in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.636196  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Getting to WaitForSSH function...
	I0316 00:00:22.636228  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Reserved static IP address: 192.168.39.143
	I0316 00:00:22.636243  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Waiting for SSH to be available...
	I0316 00:00:22.638531  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.638843  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:minikube Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:22.638878  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.639032  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Using SSH client type: external
	I0316 00:00:22.639064  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa (-rw-------)
	I0316 00:00:22.639098  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.143 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:00:22.639118  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | About to run SSH command:
	I0316 00:00:22.639136  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | exit 0
	I0316 00:00:22.767078  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | SSH cmd err, output: <nil>: 
	I0316 00:00:22.767386  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) KVM machine creation complete!
	I0316 00:00:22.767780  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetConfigRaw
	I0316 00:00:22.768419  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:22.768617  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:22.768763  113182 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0316 00:00:22.768784  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetState
	I0316 00:00:22.770202  113182 main.go:141] libmachine: Detecting operating system of created instance...
	I0316 00:00:22.770220  113182 main.go:141] libmachine: Waiting for SSH to be available...
	I0316 00:00:22.770226  113182 main.go:141] libmachine: Getting to WaitForSSH function...
	I0316 00:00:22.770232  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:22.772570  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.772917  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:22.772958  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.773090  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:22.773240  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:22.773410  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:22.773558  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:22.773780  113182 main.go:141] libmachine: Using SSH client type: native
	I0316 00:00:22.774037  113182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0316 00:00:22.774050  113182 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0316 00:00:22.882599  113182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:00:22.882623  113182 main.go:141] libmachine: Detecting the provisioner...
	I0316 00:00:22.882631  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:22.885520  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.885843  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:22.885875  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.885989  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:22.886245  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:22.886417  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:22.886552  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:22.886708  113182 main.go:141] libmachine: Using SSH client type: native
	I0316 00:00:22.886877  113182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0316 00:00:22.886887  113182 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0316 00:00:22.996084  113182 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0316 00:00:22.996171  113182 main.go:141] libmachine: found compatible host: buildroot
	I0316 00:00:22.996185  113182 main.go:141] libmachine: Provisioning with buildroot...
	I0316 00:00:22.996195  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetMachineName
	I0316 00:00:22.996430  113182 buildroot.go:166] provisioning hostname "kubernetes-upgrade-209767"
	I0316 00:00:22.996459  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetMachineName
	I0316 00:00:22.996650  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:22.999331  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.999620  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:22.999645  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:22.999791  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:23.000014  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.000207  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.000368  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:23.000561  113182 main.go:141] libmachine: Using SSH client type: native
	I0316 00:00:23.000733  113182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0316 00:00:23.000746  113182 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-209767 && echo "kubernetes-upgrade-209767" | sudo tee /etc/hostname
	I0316 00:00:23.126717  113182 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-209767
	
	I0316 00:00:23.126753  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:23.129845  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.130262  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.130299  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.130464  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:23.130665  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.130851  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.131042  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:23.131247  113182 main.go:141] libmachine: Using SSH client type: native
	I0316 00:00:23.131466  113182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0316 00:00:23.131487  113182 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-209767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-209767/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-209767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:00:23.254610  113182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:00:23.254639  113182 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:00:23.254657  113182 buildroot.go:174] setting up certificates
	I0316 00:00:23.254665  113182 provision.go:84] configureAuth start
	I0316 00:00:23.254682  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetMachineName
	I0316 00:00:23.254990  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetIP
	I0316 00:00:23.257907  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.258265  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.258296  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.258475  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:23.260846  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.261113  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.261141  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.261215  113182 provision.go:143] copyHostCerts
	I0316 00:00:23.261310  113182 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:00:23.261321  113182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:00:23.261394  113182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:00:23.261513  113182 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:00:23.261525  113182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:00:23.261553  113182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:00:23.261618  113182 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:00:23.261625  113182 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:00:23.261655  113182 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:00:23.261722  113182 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-209767 san=[127.0.0.1 192.168.39.143 kubernetes-upgrade-209767 localhost minikube]
	I0316 00:00:23.436474  113182 provision.go:177] copyRemoteCerts
	I0316 00:00:23.436559  113182 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:00:23.436607  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:23.439124  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.439376  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.439418  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.439545  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:23.439735  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.439900  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:23.440044  113182 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa Username:docker}
	I0316 00:00:23.525286  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:00:23.550052  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0316 00:00:23.574571  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:00:23.598656  113182 provision.go:87] duration metric: took 343.979035ms to configureAuth
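
A quick way to double-check the SANs baked into the server certificate copied to /etc/docker/server.pem above, assuming openssl is present in the Buildroot guest image (an illustrative sketch, not a step this test runs):

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
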
	I0316 00:00:23.598682  113182 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:00:23.598868  113182 config.go:182] Loaded profile config "kubernetes-upgrade-209767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:00:23.598955  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:23.601459  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.601890  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.601924  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.602177  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:23.602400  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.602569  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.602700  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:23.602864  113182 main.go:141] libmachine: Using SSH client type: native
	I0316 00:00:23.603058  113182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0316 00:00:23.603072  113182 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:00:23.876338  113182 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:00:23.876365  113182 main.go:141] libmachine: Checking connection to Docker...
	I0316 00:00:23.876374  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetURL
	I0316 00:00:23.877627  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | Using libvirt version 6000000
	I0316 00:00:23.879674  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.879983  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.880004  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.880167  113182 main.go:141] libmachine: Docker is up and running!
	I0316 00:00:23.880184  113182 main.go:141] libmachine: Reticulating splines...
	I0316 00:00:23.880192  113182 client.go:171] duration metric: took 25.373619175s to LocalClient.Create
	I0316 00:00:23.880235  113182 start.go:167] duration metric: took 25.373690187s to libmachine.API.Create "kubernetes-upgrade-209767"
	I0316 00:00:23.880257  113182 start.go:293] postStartSetup for "kubernetes-upgrade-209767" (driver="kvm2")
	I0316 00:00:23.880273  113182 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:00:23.880294  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:23.880550  113182 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:00:23.880578  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:23.882689  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.883031  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:23.883059  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:23.883159  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:23.883340  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:23.883496  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:23.883609  113182 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa Username:docker}
	I0316 00:00:23.969642  113182 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:00:23.974023  113182 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:00:23.974051  113182 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:00:23.974110  113182 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:00:23.974178  113182 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:00:23.974272  113182 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:00:23.984986  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:00:24.009726  113182 start.go:296] duration metric: took 129.452433ms for postStartSetup
	I0316 00:00:24.009780  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetConfigRaw
	I0316 00:00:24.010359  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetIP
	I0316 00:00:24.012955  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.013347  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:24.013386  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.013741  113182 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/config.json ...
	I0316 00:00:24.013980  113182 start.go:128] duration metric: took 25.526481102s to createHost
	I0316 00:00:24.014011  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:24.016340  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.016671  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:24.016702  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.016809  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:24.016982  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:24.017158  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:24.017265  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:24.017455  113182 main.go:141] libmachine: Using SSH client type: native
	I0316 00:00:24.017617  113182 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.143 22 <nil> <nil>}
	I0316 00:00:24.017632  113182 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:00:24.128197  113182 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710547224.104390830
	
	I0316 00:00:24.128230  113182 fix.go:216] guest clock: 1710547224.104390830
	I0316 00:00:24.128241  113182 fix.go:229] Guest: 2024-03-16 00:00:24.10439083 +0000 UTC Remote: 2024-03-16 00:00:24.013995089 +0000 UTC m=+25.677141589 (delta=90.395741ms)
	I0316 00:00:24.128270  113182 fix.go:200] guest clock delta is within tolerance: 90.395741ms
	I0316 00:00:24.128276  113182 start.go:83] releasing machines lock for "kubernetes-upgrade-209767", held for 25.640869924s
	I0316 00:00:24.128304  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:24.128624  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetIP
	I0316 00:00:24.131717  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.132129  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:24.132176  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.132268  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:24.132832  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:24.133066  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .DriverName
	I0316 00:00:24.133166  113182 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:00:24.133219  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:24.133288  113182 ssh_runner.go:195] Run: cat /version.json
	I0316 00:00:24.133312  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHHostname
	I0316 00:00:24.136025  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.136265  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.136442  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:24.136471  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.136648  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:24.136696  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:24.136748  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:24.136860  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:24.136911  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHPort
	I0316 00:00:24.137046  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:24.137119  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHKeyPath
	I0316 00:00:24.137200  113182 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa Username:docker}
	I0316 00:00:24.137302  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetSSHUsername
	I0316 00:00:24.137445  113182 sshutil.go:53] new ssh client: &{IP:192.168.39.143 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kubernetes-upgrade-209767/id_rsa Username:docker}
	I0316 00:00:24.248523  113182 ssh_runner.go:195] Run: systemctl --version
	I0316 00:00:24.255904  113182 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:00:24.424833  113182 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:00:24.431029  113182 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:00:24.431112  113182 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:00:24.447888  113182 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:00:24.447918  113182 start.go:494] detecting cgroup driver to use...
	I0316 00:00:24.447992  113182 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:00:24.466828  113182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:00:24.482709  113182 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:00:24.482771  113182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:00:24.497745  113182 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:00:24.512012  113182 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:00:24.624578  113182 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:00:24.774803  113182 docker.go:233] disabling docker service ...
	I0316 00:00:24.774879  113182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:00:24.794309  113182 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:00:24.809225  113182 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:00:24.955399  113182 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:00:25.081946  113182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:00:25.096914  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:00:25.117705  113182 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:00:25.117777  113182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:00:25.130288  113182 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:00:25.130362  113182 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:00:25.143302  113182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:00:25.155485  113182 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:00:25.166911  113182 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:00:25.178430  113182 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:00:25.188863  113182 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:00:25.188911  113182 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:00:25.204034  113182 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:00:25.214322  113182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:00:25.335581  113182 ssh_runner.go:195] Run: sudo systemctl restart crio
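
Taken together, the crictl.yaml write and the three sed edits above (pause_image, cgroup_manager, conmon_cgroup) leave the runtime configured roughly as sketched below; the TOML section names are assumptions about the stock drop-in layout in the Buildroot image, shown for orientation only:

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (keys touched by the sed edits)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.2"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
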
	I0316 00:00:25.493418  113182 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:00:25.493507  113182 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:00:25.498525  113182 start.go:562] Will wait 60s for crictl version
	I0316 00:00:25.498574  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:25.502565  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:00:25.544827  113182 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:00:25.544925  113182 ssh_runner.go:195] Run: crio --version
	I0316 00:00:25.582041  113182 ssh_runner.go:195] Run: crio --version
	I0316 00:00:25.616359  113182 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:00:25.617762  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) Calling .GetIP
	I0316 00:00:25.620855  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:25.621274  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:59:2d:2b", ip: ""} in network mk-kubernetes-upgrade-209767: {Iface:virbr1 ExpiryTime:2024-03-16 01:00:13 +0000 UTC Type:0 Mac:52:54:00:59:2d:2b Iaid: IPaddr:192.168.39.143 Prefix:24 Hostname:kubernetes-upgrade-209767 Clientid:01:52:54:00:59:2d:2b}
	I0316 00:00:25.621298  113182 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined IP address 192.168.39.143 and MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:00:25.621523  113182 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:00:25.625977  113182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:00:25.640310  113182 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-209767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-209767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:00:25.640453  113182 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:00:25.640529  113182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:00:25.676807  113182 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:00:25.676874  113182 ssh_runner.go:195] Run: which lz4
	I0316 00:00:25.681402  113182 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:00:25.685769  113182 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:00:25.685801  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:00:27.543253  113182 crio.go:444] duration metric: took 1.861911539s to copy over tarball
	I0316 00:00:27.543375  113182 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:00:30.239556  113182 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.696136767s)
	I0316 00:00:30.239592  113182 crio.go:451] duration metric: took 2.696304012s to extract the tarball
	I0316 00:00:30.239599  113182 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:00:30.292332  113182 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:00:30.338556  113182 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:00:30.338593  113182 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:00:30.338709  113182 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:00:30.338744  113182 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:00:30.338745  113182 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:00:30.338705  113182 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:00:30.338745  113182 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:00:30.338678  113182 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:00:30.338684  113182 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:00:30.338874  113182 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:00:30.340399  113182 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:00:30.340408  113182 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:00:30.340436  113182 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:00:30.340461  113182 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:00:30.340477  113182 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:00:30.340400  113182 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:00:30.340407  113182 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:00:30.340621  113182 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:00:30.493692  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:00:30.493692  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:00:30.494658  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:00:30.507150  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:00:30.521694  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:00:30.528476  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:00:30.587988  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:00:30.610879  113182 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:00:30.610989  113182 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:00:30.611041  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.611057  113182 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:00:30.611095  113182 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:00:30.611143  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.617773  113182 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:00:30.617833  113182 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:00:30.617893  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.643525  113182 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:00:30.643581  113182 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:00:30.643642  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.677569  113182 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:00:30.677626  113182 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:00:30.677646  113182 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:00:30.677675  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.677685  113182 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:00:30.677732  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.699866  113182 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:00:30.699919  113182 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:00:30.699963  113182 ssh_runner.go:195] Run: which crictl
	I0316 00:00:30.699967  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:00:30.699982  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:00:30.700039  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:00:30.700091  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:00:30.700138  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:00:30.700151  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:00:30.831809  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:00:30.831944  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:00:30.831961  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:00:30.836133  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:00:30.836170  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:00:30.836231  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:00:30.836255  113182 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:00:30.872478  113182 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:00:31.075090  113182 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:00:31.214709  113182 cache_images.go:92] duration metric: took 876.097887ms to LoadCachedImages
	W0316 00:00:31.214808  113182 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0316 00:00:31.214821  113182 kubeadm.go:928] updating node { 192.168.39.143 8443 v1.20.0 crio true true} ...
	I0316 00:00:31.214962  113182 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-209767 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.143
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-209767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:00:31.215099  113182 ssh_runner.go:195] Run: crio config
	I0316 00:00:31.265487  113182 cni.go:84] Creating CNI manager for ""
	I0316 00:00:31.265513  113182 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:00:31.265525  113182 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:00:31.265543  113182 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.143 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-209767 NodeName:kubernetes-upgrade-209767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.143"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.143 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:00:31.265707  113182 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.143
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-209767"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.143
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.143"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
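
The rendered config above is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few steps below. As an illustrative sketch only (not something this test executes), a config like this can be exercised without mutating the node via kubeadm's dry-run mode, assuming kubeadm sits alongside the kubelet under /var/lib/minikube/binaries/v1.20.0:

	sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
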
	
	I0316 00:00:31.265787  113182 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:00:31.276712  113182 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:00:31.276795  113182 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:00:31.287819  113182 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0316 00:00:31.307115  113182 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:00:31.327611  113182 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0316 00:00:31.349264  113182 ssh_runner.go:195] Run: grep 192.168.39.143	control-plane.minikube.internal$ /etc/hosts
	I0316 00:00:31.354035  113182 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.143	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:00:31.368126  113182 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:00:31.486365  113182 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:00:31.505362  113182 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767 for IP: 192.168.39.143
	I0316 00:00:31.505389  113182 certs.go:194] generating shared ca certs ...
	I0316 00:00:31.505404  113182 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:31.505615  113182 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:00:31.505659  113182 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:00:31.505673  113182 certs.go:256] generating profile certs ...
	I0316 00:00:31.505773  113182 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/client.key
	I0316 00:00:31.505793  113182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/client.crt with IP's: []
	I0316 00:00:31.603942  113182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/client.crt ...
	I0316 00:00:31.603973  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/client.crt: {Name:mk8a5c81108a916b2365528aef4facecb504537b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:31.604164  113182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/client.key ...
	I0316 00:00:31.604186  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/client.key: {Name:mk6e1d3116f92729fccf7011444ac902199d7c97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:31.604285  113182 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.key.ae43b307
	I0316 00:00:31.604301  113182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.crt.ae43b307 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.143]
	I0316 00:00:31.845735  113182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.crt.ae43b307 ...
	I0316 00:00:31.845765  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.crt.ae43b307: {Name:mk4f2793b5fc539fc676b56e4bfe73e24bfa5775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:31.845925  113182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.key.ae43b307 ...
	I0316 00:00:31.845943  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.key.ae43b307: {Name:mk7dbd50a791909c6cf49b79447adf874158156b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:31.846013  113182 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.crt.ae43b307 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.crt
	I0316 00:00:31.846083  113182 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.key.ae43b307 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.key
	I0316 00:00:31.846138  113182 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.key
	I0316 00:00:31.846153  113182 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.crt with IP's: []
	I0316 00:00:32.152790  113182 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.crt ...
	I0316 00:00:32.152831  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.crt: {Name:mka8b4614bc03ee4a7622357ddbf237e7a9efac4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:32.153020  113182 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.key ...
	I0316 00:00:32.153039  113182 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.key: {Name:mkbb81ed1dc07ce92e409461debc650e74e92ff8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:00:32.153219  113182 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:00:32.153260  113182 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:00:32.153273  113182 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:00:32.153297  113182 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:00:32.153326  113182 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:00:32.153362  113182 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:00:32.153413  113182 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:00:32.154137  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:00:32.184016  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:00:32.210503  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:00:32.242114  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:00:32.268166  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0316 00:00:32.293622  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:00:32.324024  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:00:32.351299  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kubernetes-upgrade-209767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:00:32.387611  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:00:32.423563  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:00:32.448262  113182 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:00:32.472903  113182 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:00:32.490671  113182 ssh_runner.go:195] Run: openssl version
	I0316 00:00:32.496697  113182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:00:32.508380  113182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:00:32.512961  113182 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:00:32.513019  113182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:00:32.518768  113182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:00:32.530243  113182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:00:32.542069  113182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:00:32.547572  113182 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:00:32.547619  113182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:00:32.554213  113182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:00:32.566132  113182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:00:32.577803  113182 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:00:32.582342  113182 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:00:32.582393  113182 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:00:32.588096  113182 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
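	(Editor's note, for context on the lines above: the hash-named symlinks minikube creates follow OpenSSL's subject-hash lookup convention. Each CA certificate is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then linked again under the hash that the log's own 'openssl x509 -hash' call prints, so OpenSSL-based clients on the node can resolve it. A minimal sketch using the minikubeCA paths and the b5213941 hash value taken from the log, not an additional step of the test run:)
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 here
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0       # hash-named link that OpenSSL clients look up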
	I0316 00:00:32.599710  113182 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:00:32.604050  113182 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 00:00:32.604137  113182 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-209767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-209767 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.143 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:00:32.604243  113182 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:00:32.604300  113182 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:00:32.642418  113182 cri.go:89] found id: ""
	I0316 00:00:32.642517  113182 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 00:00:32.653658  113182 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:00:32.664470  113182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:00:32.675088  113182 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:00:32.675113  113182 kubeadm.go:156] found existing configuration files:
	
	I0316 00:00:32.675169  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:00:32.685304  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:00:32.685376  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:00:32.696058  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:00:32.705948  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:00:32.706010  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:00:32.716580  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:00:32.726536  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:00:32.726599  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:00:32.737161  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:00:32.747453  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:00:32.747536  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:00:32.758242  113182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:00:32.888693  113182 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:00:32.889293  113182 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:00:33.054137  113182 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:00:33.054292  113182 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:00:33.054440  113182 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:00:33.295486  113182 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:00:33.298662  113182 out.go:204]   - Generating certificates and keys ...
	I0316 00:00:33.298776  113182 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:00:33.298869  113182 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:00:33.369730  113182 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 00:00:33.453952  113182 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 00:00:33.616266  113182 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 00:00:33.860407  113182 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 00:00:33.961505  113182 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 00:00:33.961658  113182 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-209767 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	I0316 00:00:34.111888  113182 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 00:00:34.112143  113182 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-209767 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	I0316 00:00:34.461593  113182 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 00:00:34.575623  113182 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 00:00:34.653120  113182 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 00:00:34.653434  113182 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:00:35.000922  113182 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:00:35.188986  113182 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:00:35.274714  113182 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:00:35.465095  113182 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:00:35.483909  113182 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:00:35.486863  113182 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:00:35.486956  113182 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:00:35.621763  113182 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:00:35.625347  113182 out.go:204]   - Booting up control plane ...
	I0316 00:00:35.625479  113182 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:00:35.631223  113182 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:00:35.632221  113182 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:00:35.633067  113182 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:00:35.637398  113182 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:01:15.631884  113182 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:01:15.633341  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:01:15.633594  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:01:20.634006  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:01:20.634276  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:01:30.633670  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:01:30.633923  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:01:50.633460  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:01:50.633752  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:02:30.635910  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:02:30.636223  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:02:30.636241  113182 kubeadm.go:309] 
	I0316 00:02:30.636310  113182 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:02:30.636362  113182 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:02:30.636373  113182 kubeadm.go:309] 
	I0316 00:02:30.636425  113182 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:02:30.636474  113182 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:02:30.636607  113182 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:02:30.636621  113182 kubeadm.go:309] 
	I0316 00:02:30.636742  113182 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:02:30.636795  113182 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:02:30.636844  113182 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:02:30.636858  113182 kubeadm.go:309] 
	I0316 00:02:30.636984  113182 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:02:30.637122  113182 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:02:30.637147  113182 kubeadm.go:309] 
	I0316 00:02:30.637304  113182 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:02:30.637434  113182 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:02:30.637548  113182 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:02:30.637651  113182 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:02:30.637672  113182 kubeadm.go:309] 
	I0316 00:02:30.637972  113182 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:02:30.638091  113182 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:02:30.638228  113182 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:02:30.638340  113182 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-209767 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-209767 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-209767 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-209767 localhost] and IPs [192.168.39.143 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
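	(Editor's note: at this point the first 'kubeadm init' attempt has timed out in the wait-control-plane phase because the kubelet's health endpoint never answered; minikube resets and retries below. The probe that the [kubelet-check] messages refer to, and the follow-up commands the error text itself suggests, can be reproduced by hand over 'minikube ssh' when debugging this kind of failure; this is only an illustrative sketch, not part of the recorded test run:)
	curl -sSL http://localhost:10248/healthz   # the kubeadm kubelet-check probe; "connection refused" while the kubelet is down, as logged above
	systemctl status kubelet                   # first troubleshooting step suggested by kubeadm
	journalctl -xeu kubelet                    # kubelet logs showing why it failed to start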
	
	I0316 00:02:30.638399  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:02:32.057050  113182 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.41860462s)
	I0316 00:02:32.057128  113182 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:02:32.072113  113182 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:02:32.082507  113182 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:02:32.082533  113182 kubeadm.go:156] found existing configuration files:
	
	I0316 00:02:32.082590  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:02:32.092453  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:02:32.092518  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:02:32.102894  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:02:32.112958  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:02:32.113018  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:02:32.123336  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:02:32.132999  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:02:32.133060  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:02:32.143811  113182 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:02:32.153894  113182 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:02:32.153959  113182 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:02:32.163605  113182 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:02:32.398359  113182 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:04:28.423450  113182 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:04:28.423565  113182 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:04:28.425875  113182 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:04:28.425958  113182 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:04:28.426067  113182 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:04:28.426202  113182 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:04:28.426355  113182 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:04:28.426458  113182 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:04:28.428484  113182 out.go:204]   - Generating certificates and keys ...
	I0316 00:04:28.428589  113182 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:04:28.428689  113182 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:04:28.428827  113182 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:04:28.428925  113182 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:04:28.429032  113182 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:04:28.429118  113182 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:04:28.429237  113182 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:04:28.429312  113182 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:04:28.429405  113182 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:04:28.429510  113182 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:04:28.429563  113182 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:04:28.429638  113182 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:04:28.429713  113182 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:04:28.429785  113182 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:04:28.429869  113182 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:04:28.429943  113182 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:04:28.430069  113182 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:04:28.430178  113182 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:04:28.430236  113182 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:04:28.430345  113182 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:04:28.431881  113182 out.go:204]   - Booting up control plane ...
	I0316 00:04:28.431971  113182 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:04:28.432040  113182 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:04:28.432139  113182 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:04:28.432247  113182 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:04:28.432431  113182 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:04:28.432504  113182 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:04:28.432597  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:04:28.432799  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:04:28.432882  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:04:28.433135  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:04:28.433233  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:04:28.433481  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:04:28.433572  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:04:28.433804  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:04:28.433884  113182 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:04:28.434118  113182 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:04:28.434131  113182 kubeadm.go:309] 
	I0316 00:04:28.434190  113182 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:04:28.434241  113182 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:04:28.434256  113182 kubeadm.go:309] 
	I0316 00:04:28.434313  113182 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:04:28.434365  113182 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:04:28.434519  113182 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:04:28.434537  113182 kubeadm.go:309] 
	I0316 00:04:28.434695  113182 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:04:28.434752  113182 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:04:28.434791  113182 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:04:28.434798  113182 kubeadm.go:309] 
	I0316 00:04:28.434881  113182 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:04:28.434955  113182 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:04:28.434962  113182 kubeadm.go:309] 
	I0316 00:04:28.435087  113182 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:04:28.435165  113182 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:04:28.435228  113182 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:04:28.435335  113182 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:04:28.435412  113182 kubeadm.go:309] 
	I0316 00:04:28.435428  113182 kubeadm.go:393] duration metric: took 3m55.831295972s to StartCluster
	I0316 00:04:28.435493  113182 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:04:28.435575  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:04:28.493924  113182 cri.go:89] found id: ""
	I0316 00:04:28.493957  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.493970  113182 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:04:28.493979  113182 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:04:28.494065  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:04:28.548506  113182 cri.go:89] found id: ""
	I0316 00:04:28.548541  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.548551  113182 logs.go:278] No container was found matching "etcd"
	I0316 00:04:28.548557  113182 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:04:28.548618  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:04:28.590969  113182 cri.go:89] found id: ""
	I0316 00:04:28.591003  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.591015  113182 logs.go:278] No container was found matching "coredns"
	I0316 00:04:28.591023  113182 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:04:28.591081  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:04:28.644041  113182 cri.go:89] found id: ""
	I0316 00:04:28.644084  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.644097  113182 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:04:28.644106  113182 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:04:28.644181  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:04:28.695914  113182 cri.go:89] found id: ""
	I0316 00:04:28.695946  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.695958  113182 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:04:28.695967  113182 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:04:28.696038  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:04:28.738184  113182 cri.go:89] found id: ""
	I0316 00:04:28.738218  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.738230  113182 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:04:28.738239  113182 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:04:28.738310  113182 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:04:28.786675  113182 cri.go:89] found id: ""
	I0316 00:04:28.786708  113182 logs.go:276] 0 containers: []
	W0316 00:04:28.786719  113182 logs.go:278] No container was found matching "kindnet"
	I0316 00:04:28.786732  113182 logs.go:123] Gathering logs for kubelet ...
	I0316 00:04:28.786752  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:04:28.845731  113182 logs.go:123] Gathering logs for dmesg ...
	I0316 00:04:28.845766  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:04:28.868938  113182 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:04:28.868981  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:04:29.042428  113182 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:04:29.042459  113182 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:04:29.042476  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:04:29.174798  113182 logs.go:123] Gathering logs for container status ...
	I0316 00:04:29.174851  113182 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0316 00:04:29.219221  113182 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:04:29.219282  113182 out.go:239] * 
	* 
	W0316 00:04:29.219374  113182 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:04:29.219409  113182 out.go:239] * 
	W0316 00:04:29.220512  113182 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:04:29.223694  113182 out.go:177] 
	W0316 00:04:29.225313  113182 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:04:29.225384  113182 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:04:29.225416  113182 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:04:29.228169  113182 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
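The timeout above lines up with the suggestion minikube printed in the log, which points at a possible kubelet cgroup-driver mismatch with CRI-O on Kubernetes v1.20.0. A minimal manual retry sketch, assuming the same profile name and that the suggested flag is the relevant fix (not verified by this run):

	out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd
	out/minikube-linux-amd64 -p kubernetes-upgrade-209767 ssh 'sudo journalctl -xeu kubelet | tail -n 50'

The test itself does not retry at v1.20.0; it stops the profile and moves on to the upgrade path below.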
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-209767
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-209767: (2.743522578s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-209767 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-209767 status --format={{.Host}}: exit status 7 (99.938213ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
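The exit status 7 here appears to reflect only that the host is stopped (the stdout above reads "Stopped"), which is expected right after the stop step, so the test continues. A sketch for checking the same state by hand with this profile:

	out/minikube-linux-amd64 -p kubernetes-upgrade-209767 status
	out/minikube-linux-amd64 -p kubernetes-upgrade-209767 status --format={{.Host}}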
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (55.808081812s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-209767 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (111.671254ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-209767] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-209767
	    minikube start -p kubernetes-upgrade-209767 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2097672 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-209767 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
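The refusal above is the expected behavior: minikube will not downgrade an existing cluster in place, and its error message lists the supported paths. A sketch of the recreate route (option 1 from the message), had v1.20.0 actually been wanted here:

	out/minikube-linux-amd64 delete -p kubernetes-upgrade-209767
	out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio

The test instead keeps the v1.29.0-rc.2 cluster and simply restarts it, which corresponds to option 3.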
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-209767 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (50.98183477s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-16 00:06:19.134643405 +0000 UTC m=+4209.557494682
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-209767 -n kubernetes-upgrade-209767
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-209767 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-209767 logs -n 25: (1.734433038s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-869135 sudo                  | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat              | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat              | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                  | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                  | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                  | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo find             | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo crio             | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-869135                       | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:03 UTC |
	| delete  | -p force-systemd-env-380757            | force-systemd-env-380757  | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:03 UTC |
	| start   | -p stopped-upgrade-684927              | minikube                  | jenkins | v1.26.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:04 UTC |
	|         | --memory=2200 --vm-driver=kvm2         |                           |         |         |                     |                     |
	|         |  --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p cert-expiration-982877              | cert-expiration-982877    | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:05 UTC |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-209767           | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:04 UTC |
	| start   | -p kubernetes-upgrade-209767           | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:05 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-684927 stop            | minikube                  | jenkins | v1.26.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:04 UTC |
	| start   | -p stopped-upgrade-684927              | stopped-upgrade-684927    | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:05 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p pause-033460                        | pause-033460              | jenkins | v1.32.0 | 16 Mar 24 00:05 UTC | 16 Mar 24 00:05 UTC |
	| start   | -p force-systemd-flag-844359           | force-systemd-flag-844359 | jenkins | v1.32.0 | 16 Mar 24 00:05 UTC | 16 Mar 24 00:06 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-209767           | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:05 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-209767           | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:05 UTC | 16 Mar 24 00:06 UTC |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2      |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-684927              | stopped-upgrade-684927    | jenkins | v1.32.0 | 16 Mar 24 00:05 UTC | 16 Mar 24 00:05 UTC |
	| start   | -p cert-options-313368                 | cert-options-313368       | jenkins | v1.32.0 | 16 Mar 24 00:05 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1              |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15          |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost            |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com       |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-844359 ssh cat      | force-systemd-flag-844359 | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-844359           | force-systemd-flag-844359 | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p old-k8s-version-402923              | old-k8s-version-402923    | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true          |                           |         |         |                     |                     |
	|         | --kvm-network=default                  |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system          |                           |         |         |                     |                     |
	|         | --disable-driver-mounts                |                           |         |         |                     |                     |
	|         | --keep-context=false                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:06:19
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:06:19.154878  120517 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:06:19.154991  120517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:06:19.155000  120517 out.go:304] Setting ErrFile to fd 2...
	I0316 00:06:19.155004  120517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:06:19.155208  120517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:06:19.155928  120517 out.go:298] Setting JSON to false
	I0316 00:06:19.157019  120517 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10129,"bootTime":1710537450,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:06:19.157082  120517 start.go:139] virtualization: kvm guest
	I0316 00:06:19.159444  120517 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:06:19.160819  120517 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:06:19.161016  120517 notify.go:220] Checking for updates...
	I0316 00:06:19.162113  120517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:06:19.163255  120517 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:06:19.164468  120517 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:06:19.165763  120517 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:06:19.166985  120517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:06:19.168785  120517 config.go:182] Loaded profile config "cert-expiration-982877": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:06:19.168932  120517 config.go:182] Loaded profile config "cert-options-313368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:06:19.169044  120517 config.go:182] Loaded profile config "kubernetes-upgrade-209767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:06:19.169173  120517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:06:19.210267  120517 out.go:177] * Using the kvm2 driver based on user configuration
	I0316 00:06:19.211559  120517 start.go:297] selected driver: kvm2
	I0316 00:06:19.211579  120517 start.go:901] validating driver "kvm2" against <nil>
	I0316 00:06:19.211594  120517 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:06:19.212656  120517 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:06:19.212744  120517 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:06:19.227761  120517 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:06:19.227802  120517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 00:06:19.228005  120517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:06:19.228065  120517 cni.go:84] Creating CNI manager for ""
	I0316 00:06:19.228077  120517 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:06:19.228085  120517 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 00:06:19.228140  120517 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:06:19.228225  120517 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:06:19.229959  120517 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	
	
	==> CRI-O <==
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.842557119Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547579842448074,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=05214b32-5ace-4736-9464-fbe55aec994b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.843455252Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5298792e-c25d-4d5d-9fae-f3d585e2e0ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.843566812Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5298792e-c25d-4d5d-9fae-f3d585e2e0ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.843868063Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:264a8bc433cd69bbd9425b4461d77afcf424245b0b8a9f8e3c5547b4f1634207,PodSandboxId:8eb0af6f4568a87e538b4aff2f1e743bd731bee47bd06dc8844295af24038f60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710547576207199480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8b487d0f4d2f49a19e5b3ddc0f3a10612caf79d580598be77985551f6795b,PodSandboxId:bedbda758ba9314b381c262004313dc15b6137d46f7a4a35e86a1aa4cdbee094,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710547576161188433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8136055b78d28b07df9c3d12b076f3fcb8afedd54f020e516a40d96721c86bfb,PodSandboxId:3127444813a6e05d54db3f70dfafae98e94ee7d4f3bf16b62afb8c5a32cd3af7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576210191000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd59663450dd8b07f5fe55336b8775a71b4b0c7f1612c97f9a9e11872db4d13,PodSandboxId:bca7050e197ca6a7732201ab1cf0d0ad3bbfbaf4a8993541695cb09e20ed47c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576159615199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-81
9e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b338e519cac28baa70c19c133646c6087e28e9f0337c3d9ccee4d68e8c7b8d8,PodSandboxId:7582ac48e8ab1a86e47dc60e0d743bcf0eb22b18ef73229d17064727ea55f4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710547572501198042,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92e3f6cf36de78376231dd5afbfe906b043f90641a151ee3005a011ee60fc83,PodSandboxId:10e5a655d5faf66b0f9d4333a35a2660f6186b3116965a33c1babefcaba6774f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710547572498047708,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2358d22d7ee6812a10715bdd240b57a6bb8269245908bd3318244a59fe7e69a0,PodSandboxId:94090803259ee19f0507de836d81b9bcbbb06f9e9b278b56980962209bb77a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710547572472088283,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbedf67f538f496fa6d2fce235ee9e8985f04bd056d1541d8a08bdd6ae1e6a0,PodSandboxId:4808aa092d6beff4cb02bca0b697eeb0fa1dddc4ec30a5a8d88ea64c33243069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710547572486622002,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9406a701f064aa45a76fd6ba8666dc35b32966b6cc37290c3b6fc7ce277ff7,PodSandboxId:d2348b2431b933ee3c0e2aea0acb89c30d84c13717824e432297df7cc854d1d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710547566412687692,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7bd1cbb5ac6e19aea0725db266056bd0abd3c9d118b66812de960132c4e719,PodSandboxId:ac6058cd0be6c2d3febcd67a19c6b0797e909e036383d50dea3003a56925e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710547566270470801,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290f367ad18d79635b85052552cfa66814c10ee77a8c1842d1991eb84df7829,PodSandboxId:f416aa0e5fc477f2c920912d8be4c310dd55c4771d945bd901e2b70d8210c2f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547567065311658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-819e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ec61f64a4fb21999f78063484671982d40b321fd0b20cf01e904abf1830e0a,PodSandboxId:065d48d8722e68d6b781fb6fe3587cfbf7c18659588179045a37dbadd0ed3dc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547566592723359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f02bdc6b71ce962c603d180e0bf4b4a19e5b76122c63d26a7ff534f22fb5e89,PodSandboxId:af884a62f8421f437a15c5a373d9aed5e9091619f18cbd160bb9f9fbd64dce24,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710547566456027171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea4b98c368683e5a024e02bf207f9211a8df3a9782a64c7763538141c2a3cd2,PodSandboxId:8737c612e7fb034c249e0212f62c9973d205134724cf02077c54102d9f407311,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710547566237320574,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c0574d45f2bc15cf4d21edf4001c0802256790092d9f27c5c226ace1634057,PodSandboxId:694c064fb98e295a4e82643751b3b2c573f25d34c0c9f4ad12ae89670f3c606e,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710547566089721700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253b1bc04dee2730f9f0ee4d4b81973c0b7ab37e274780cebd90305ff355909f,PodSandboxId:56248ba9b25fa949395b1ebee9c31ecb1ba030dcf0679f9b1c856dfd42fa5a28,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710547565914839181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5298792e-c25d-4d5d-9fae-f3d585e2e0ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.888884435Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a034c0a5-3163-46c7-b3c1-f9a4f8870ed4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.888954926Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a034c0a5-3163-46c7-b3c1-f9a4f8870ed4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.890037591Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1797262f-2d7f-4954-9f56-357d8277b65d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.890604452Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547579890576101,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1797262f-2d7f-4954-9f56-357d8277b65d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.891057617Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0f329cfd-0ef1-47ec-afa2-578922d4dd86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.891135180Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0f329cfd-0ef1-47ec-afa2-578922d4dd86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.891474223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:264a8bc433cd69bbd9425b4461d77afcf424245b0b8a9f8e3c5547b4f1634207,PodSandboxId:8eb0af6f4568a87e538b4aff2f1e743bd731bee47bd06dc8844295af24038f60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710547576207199480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8b487d0f4d2f49a19e5b3ddc0f3a10612caf79d580598be77985551f6795b,PodSandboxId:bedbda758ba9314b381c262004313dc15b6137d46f7a4a35e86a1aa4cdbee094,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710547576161188433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8136055b78d28b07df9c3d12b076f3fcb8afedd54f020e516a40d96721c86bfb,PodSandboxId:3127444813a6e05d54db3f70dfafae98e94ee7d4f3bf16b62afb8c5a32cd3af7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576210191000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd59663450dd8b07f5fe55336b8775a71b4b0c7f1612c97f9a9e11872db4d13,PodSandboxId:bca7050e197ca6a7732201ab1cf0d0ad3bbfbaf4a8993541695cb09e20ed47c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576159615199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-81
9e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b338e519cac28baa70c19c133646c6087e28e9f0337c3d9ccee4d68e8c7b8d8,PodSandboxId:7582ac48e8ab1a86e47dc60e0d743bcf0eb22b18ef73229d17064727ea55f4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710547572501198042,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92e3f6cf36de78376231dd5afbfe906b043f90641a151ee3005a011ee60fc83,PodSandboxId:10e5a655d5faf66b0f9d4333a35a2660f6186b3116965a33c1babefcaba6774f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710547572498047708,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2358d22d7ee6812a10715bdd240b57a6bb8269245908bd3318244a59fe7e69a0,PodSandboxId:94090803259ee19f0507de836d81b9bcbbb06f9e9b278b56980962209bb77a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710547572472088283,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbedf67f538f496fa6d2fce235ee9e8985f04bd056d1541d8a08bdd6ae1e6a0,PodSandboxId:4808aa092d6beff4cb02bca0b697eeb0fa1dddc4ec30a5a8d88ea64c33243069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710547572486622002,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9406a701f064aa45a76fd6ba8666dc35b32966b6cc37290c3b6fc7ce277ff7,PodSandboxId:d2348b2431b933ee3c0e2aea0acb89c30d84c13717824e432297df7cc854d1d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710547566412687692,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7bd1cbb5ac6e19aea0725db266056bd0abd3c9d118b66812de960132c4e719,PodSandboxId:ac6058cd0be6c2d3febcd67a19c6b0797e909e036383d50dea3003a56925e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710547566270470801,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290f367ad18d79635b85052552cfa66814c10ee77a8c1842d1991eb84df7829,PodSandboxId:f416aa0e5fc477f2c920912d8be4c310dd55c4771d945bd901e2b70d8210c2f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547567065311658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-819e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ec61f64a4fb21999f78063484671982d40b321fd0b20cf01e904abf1830e0a,PodSandboxId:065d48d8722e68d6b781fb6fe3587cfbf7c18659588179045a37dbadd0ed3dc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547566592723359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f02bdc6b71ce962c603d180e0bf4b4a19e5b76122c63d26a7ff534f22fb5e89,PodSandboxId:af884a62f8421f437a15c5a373d9aed5e9091619f18cbd160bb9f9fbd64dce24,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710547566456027171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea4b98c368683e5a024e02bf207f9211a8df3a9782a64c7763538141c2a3cd2,PodSandboxId:8737c612e7fb034c249e0212f62c9973d205134724cf02077c54102d9f407311,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710547566237320574,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c0574d45f2bc15cf4d21edf4001c0802256790092d9f27c5c226ace1634057,PodSandboxId:694c064fb98e295a4e82643751b3b2c573f25d34c0c9f4ad12ae89670f3c606e,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710547566089721700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253b1bc04dee2730f9f0ee4d4b81973c0b7ab37e274780cebd90305ff355909f,PodSandboxId:56248ba9b25fa949395b1ebee9c31ecb1ba030dcf0679f9b1c856dfd42fa5a28,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710547565914839181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0f329cfd-0ef1-47ec-afa2-578922d4dd86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.963917263Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ca87df01-d487-4e0b-9f35-3f4f7cc059c3 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.964015855Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ca87df01-d487-4e0b-9f35-3f4f7cc059c3 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.964923801Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=24bd8294-247c-43d5-94f1-278c995aa125 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.965350261Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547579965326896,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=24bd8294-247c-43d5-94f1-278c995aa125 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.965906418Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b251e70e-9e90-4f39-85cb-b76f63aeda58 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.966001414Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b251e70e-9e90-4f39-85cb-b76f63aeda58 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:19 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:19.966386085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:264a8bc433cd69bbd9425b4461d77afcf424245b0b8a9f8e3c5547b4f1634207,PodSandboxId:8eb0af6f4568a87e538b4aff2f1e743bd731bee47bd06dc8844295af24038f60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710547576207199480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8b487d0f4d2f49a19e5b3ddc0f3a10612caf79d580598be77985551f6795b,PodSandboxId:bedbda758ba9314b381c262004313dc15b6137d46f7a4a35e86a1aa4cdbee094,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710547576161188433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8136055b78d28b07df9c3d12b076f3fcb8afedd54f020e516a40d96721c86bfb,PodSandboxId:3127444813a6e05d54db3f70dfafae98e94ee7d4f3bf16b62afb8c5a32cd3af7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576210191000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd59663450dd8b07f5fe55336b8775a71b4b0c7f1612c97f9a9e11872db4d13,PodSandboxId:bca7050e197ca6a7732201ab1cf0d0ad3bbfbaf4a8993541695cb09e20ed47c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576159615199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-81
9e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b338e519cac28baa70c19c133646c6087e28e9f0337c3d9ccee4d68e8c7b8d8,PodSandboxId:7582ac48e8ab1a86e47dc60e0d743bcf0eb22b18ef73229d17064727ea55f4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710547572501198042,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92e3f6cf36de78376231dd5afbfe906b043f90641a151ee3005a011ee60fc83,PodSandboxId:10e5a655d5faf66b0f9d4333a35a2660f6186b3116965a33c1babefcaba6774f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710547572498047708,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2358d22d7ee6812a10715bdd240b57a6bb8269245908bd3318244a59fe7e69a0,PodSandboxId:94090803259ee19f0507de836d81b9bcbbb06f9e9b278b56980962209bb77a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710547572472088283,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbedf67f538f496fa6d2fce235ee9e8985f04bd056d1541d8a08bdd6ae1e6a0,PodSandboxId:4808aa092d6beff4cb02bca0b697eeb0fa1dddc4ec30a5a8d88ea64c33243069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710547572486622002,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9406a701f064aa45a76fd6ba8666dc35b32966b6cc37290c3b6fc7ce277ff7,PodSandboxId:d2348b2431b933ee3c0e2aea0acb89c30d84c13717824e432297df7cc854d1d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710547566412687692,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7bd1cbb5ac6e19aea0725db266056bd0abd3c9d118b66812de960132c4e719,PodSandboxId:ac6058cd0be6c2d3febcd67a19c6b0797e909e036383d50dea3003a56925e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710547566270470801,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290f367ad18d79635b85052552cfa66814c10ee77a8c1842d1991eb84df7829,PodSandboxId:f416aa0e5fc477f2c920912d8be4c310dd55c4771d945bd901e2b70d8210c2f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547567065311658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-819e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ec61f64a4fb21999f78063484671982d40b321fd0b20cf01e904abf1830e0a,PodSandboxId:065d48d8722e68d6b781fb6fe3587cfbf7c18659588179045a37dbadd0ed3dc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547566592723359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f02bdc6b71ce962c603d180e0bf4b4a19e5b76122c63d26a7ff534f22fb5e89,PodSandboxId:af884a62f8421f437a15c5a373d9aed5e9091619f18cbd160bb9f9fbd64dce24,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710547566456027171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea4b98c368683e5a024e02bf207f9211a8df3a9782a64c7763538141c2a3cd2,PodSandboxId:8737c612e7fb034c249e0212f62c9973d205134724cf02077c54102d9f407311,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710547566237320574,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c0574d45f2bc15cf4d21edf4001c0802256790092d9f27c5c226ace1634057,PodSandboxId:694c064fb98e295a4e82643751b3b2c573f25d34c0c9f4ad12ae89670f3c606e,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710547566089721700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253b1bc04dee2730f9f0ee4d4b81973c0b7ab37e274780cebd90305ff355909f,PodSandboxId:56248ba9b25fa949395b1ebee9c31ecb1ba030dcf0679f9b1c856dfd42fa5a28,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710547565914839181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b251e70e-9e90-4f39-85cb-b76f63aeda58 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.016577106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=361e51d6-40d5-441c-a34c-9570dcc99a56 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.016685463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=361e51d6-40d5-441c-a34c-9570dcc99a56 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.019993533Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=303a0e61-2201-4bb3-a283-440581ea98d7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.020381938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547580020356876,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=303a0e61-2201-4bb3-a283-440581ea98d7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.021447520Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4516409b-7799-4726-a0f0-3b43a41cdd8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.021574857Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4516409b-7799-4726-a0f0-3b43a41cdd8c name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:06:20 kubernetes-upgrade-209767 crio[2803]: time="2024-03-16 00:06:20.021950026Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:264a8bc433cd69bbd9425b4461d77afcf424245b0b8a9f8e3c5547b4f1634207,PodSandboxId:8eb0af6f4568a87e538b4aff2f1e743bd731bee47bd06dc8844295af24038f60,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710547576207199480,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 2,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e4d8b487d0f4d2f49a19e5b3ddc0f3a10612caf79d580598be77985551f6795b,PodSandboxId:bedbda758ba9314b381c262004313dc15b6137d46f7a4a35e86a1aa4cdbee094,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710547576161188433,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8136055b78d28b07df9c3d12b076f3fcb8afedd54f020e516a40d96721c86bfb,PodSandboxId:3127444813a6e05d54db3f70dfafae98e94ee7d4f3bf16b62afb8c5a32cd3af7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576210191000,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\
",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9bd59663450dd8b07f5fe55336b8775a71b4b0c7f1612c97f9a9e11872db4d13,PodSandboxId:bca7050e197ca6a7732201ab1cf0d0ad3bbfbaf4a8993541695cb09e20ed47c7,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710547576159615199,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-81
9e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b338e519cac28baa70c19c133646c6087e28e9f0337c3d9ccee4d68e8c7b8d8,PodSandboxId:7582ac48e8ab1a86e47dc60e0d743bcf0eb22b18ef73229d17064727ea55f4fa,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710547572501198042,Labels:map
[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f92e3f6cf36de78376231dd5afbfe906b043f90641a151ee3005a011ee60fc83,PodSandboxId:10e5a655d5faf66b0f9d4333a35a2660f6186b3116965a33c1babefcaba6774f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710547572498047708,Labels:map[string]string{io.kuberne
tes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2358d22d7ee6812a10715bdd240b57a6bb8269245908bd3318244a59fe7e69a0,PodSandboxId:94090803259ee19f0507de836d81b9bcbbb06f9e9b278b56980962209bb77a88,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710547572472088283,Labels:map[string]string{io.kub
ernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fbedf67f538f496fa6d2fce235ee9e8985f04bd056d1541d8a08bdd6ae1e6a0,PodSandboxId:4808aa092d6beff4cb02bca0b697eeb0fa1dddc4ec30a5a8d88ea64c33243069,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710547572486622002,Labels:map[string]
string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea9406a701f064aa45a76fd6ba8666dc35b32966b6cc37290c3b6fc7ce277ff7,PodSandboxId:d2348b2431b933ee3c0e2aea0acb89c30d84c13717824e432297df7cc854d1d0,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710547566412687692,Labels:map[string]s
tring{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d,},Annotations:map[string]string{io.kubernetes.container.hash: b1b35c2a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ed7bd1cbb5ac6e19aea0725db266056bd0abd3c9d118b66812de960132c4e719,PodSandboxId:ac6058cd0be6c2d3febcd67a19c6b0797e909e036383d50dea3003a56925e463,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710547566270470801,Labels:map[string]string{io.kubernetes.co
ntainer.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xc8zk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4c651ffe-df69-4a98-9285-21ef4517ac05,},Annotations:map[string]string{io.kubernetes.container.hash: c7abd392,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4290f367ad18d79635b85052552cfa66814c10ee77a8c1842d1991eb84df7829,PodSandboxId:f416aa0e5fc477f2c920912d8be4c310dd55c4771d945bd901e2b70d8210c2f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547567065311658,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.p
od.name: coredns-76f75df574-snqmp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: af87f494-089e-4d48-8171-819e4f84808b,},Annotations:map[string]string{io.kubernetes.container.hash: da59c288,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9ec61f64a4fb21999f78063484671982d40b321fd0b20cf01e904abf1830e0a,PodSandboxId:065d48d8722e68d6b781fb6fe3587cfbf7c18659588179045a37dbadd0ed3dc7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cb
b01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710547566592723359,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-v9vp7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8544c08a-ec13-4973-b035-6d6962b0b040,},Annotations:map[string]string{io.kubernetes.container.hash: 2f7983bf,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2f02bdc6b71ce962c603d180e0bf4b4a19e5b76122c63d26a7ff534f22fb5e89,PodSandboxId:af884a62f8421f437a15c5a373d9aed5e9091619f18cbd160bb9f9fbd64dce24,Metadata:&ContainerMetadata{
Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710547566456027171,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e4aa70b64950920e3ee96d3157acadc,},Annotations:map[string]string{io.kubernetes.container.hash: b6f74974,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ea4b98c368683e5a024e02bf207f9211a8df3a9782a64c7763538141c2a3cd2,PodSandboxId:8737c612e7fb034c249e0212f62c9973d205134724cf02077c54102d9f407311,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt
:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710547566237320574,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a32050c4d10d29636607f3432976cb95,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a0c0574d45f2bc15cf4d21edf4001c0802256790092d9f27c5c226ace1634057,PodSandboxId:694c064fb98e295a4e82643751b3b2c573f25d34c0c9f4ad12ae89670f3c606e,Metadata:&ContainerMetadata{Name:kube-apise
rver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710547566089721700,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bb280a7bd83dcdf9b42bdd8140f7a745,},Annotations:map[string]string{io.kubernetes.container.hash: ad86f738,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:253b1bc04dee2730f9f0ee4d4b81973c0b7ab37e274780cebd90305ff355909f,PodSandboxId:56248ba9b25fa949395b1ebee9c31ecb1ba030dcf0679f9b1c856dfd42fa5a28,Metadata:&ContainerMetadata{Name:kube-scheduler,A
ttempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710547565914839181,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-209767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0012a63e7f80b345a45984c7035f7c4e,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4516409b-7799-4726-a0f0-3b43a41cdd8c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	8136055b78d28       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   3127444813a6e       coredns-76f75df574-v9vp7
	264a8bc433cd6       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   3 seconds ago       Running             kube-proxy                2                   8eb0af6f4568a       kube-proxy-xc8zk
	e4d8b487d0f4d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   3 seconds ago       Running             storage-provisioner       2                   bedbda758ba93       storage-provisioner
	9bd59663450dd       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   bca7050e197ca       coredns-76f75df574-snqmp
	5b338e519cac2       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   7 seconds ago       Running             etcd                      2                   7582ac48e8ab1       etcd-kubernetes-upgrade-209767
	f92e3f6cf36de       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   7 seconds ago       Running             kube-apiserver            2                   10e5a655d5faf       kube-apiserver-kubernetes-upgrade-209767
	5fbedf67f538f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   7 seconds ago       Running             kube-scheduler            2                   4808aa092d6be       kube-scheduler-kubernetes-upgrade-209767
	2358d22d7ee68       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   7 seconds ago       Running             kube-controller-manager   2                   94090803259ee       kube-controller-manager-kubernetes-upgrade-209767
	4290f367ad18d       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Exited              coredns                   1                   f416aa0e5fc47       coredns-76f75df574-snqmp
	e9ec61f64a4fb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 seconds ago      Exited              coredns                   1                   065d48d8722e6       coredns-76f75df574-v9vp7
	2f02bdc6b71ce       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 seconds ago      Exited              etcd                      1                   af884a62f8421       etcd-kubernetes-upgrade-209767
	ea9406a701f06       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 seconds ago      Exited              storage-provisioner       1                   d2348b2431b93       storage-provisioner
	ed7bd1cbb5ac6       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 seconds ago      Exited              kube-proxy                1                   ac6058cd0be6c       kube-proxy-xc8zk
	7ea4b98c36868       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 seconds ago      Exited              kube-controller-manager   1                   8737c612e7fb0       kube-controller-manager-kubernetes-upgrade-209767
	a0c0574d45f2b       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 seconds ago      Exited              kube-apiserver            1                   694c064fb98e2       kube-apiserver-kubernetes-upgrade-209767
	253b1bc04dee2       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 seconds ago      Exited              kube-scheduler            1                   56248ba9b25fa       kube-scheduler-kubernetes-upgrade-209767
	
	
	==> coredns [4290f367ad18d79635b85052552cfa66814c10ee77a8c1842d1991eb84df7829] <==
	
	
	==> coredns [8136055b78d28b07df9c3d12b076f3fcb8afedd54f020e516a40d96721c86bfb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [9bd59663450dd8b07f5fe55336b8775a71b4b0c7f1612c97f9a9e11872db4d13] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e9ec61f64a4fb21999f78063484671982d40b321fd0b20cf01e904abf1830e0a] <==
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-209767
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-209767
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:05:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-209767
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:06:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:06:15 +0000   Sat, 16 Mar 2024 00:05:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:06:15 +0000   Sat, 16 Mar 2024 00:05:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:06:15 +0000   Sat, 16 Mar 2024 00:05:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:06:15 +0000   Sat, 16 Mar 2024 00:05:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.143
	  Hostname:    kubernetes-upgrade-209767
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 9e84758139dd43c5890f902e77f55c7d
	  System UUID:                9e847581-39dd-43c5-890f-902e77f55c7d
	  Boot ID:                    ab83afea-e85e-496b-82a8-4e5fac7eeb46
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-snqmp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     43s
	  kube-system                 coredns-76f75df574-v9vp7                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     43s
	  kube-system                 etcd-kubernetes-upgrade-209767                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         51s
	  kube-system                 kube-apiserver-kubernetes-upgrade-209767             250m (12%)    0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-209767    200m (10%)    0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-xc8zk                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-scheduler-kubernetes-upgrade-209767             100m (5%)     0 (0%)      0 (0%)           0 (0%)         54s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 42s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    64s (x8 over 65s)  kubelet          Node kubernetes-upgrade-209767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x7 over 65s)  kubelet          Node kubernetes-upgrade-209767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  64s (x8 over 65s)  kubelet          Node kubernetes-upgrade-209767 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           46s                node-controller  Node kubernetes-upgrade-209767 event: Registered Node kubernetes-upgrade-209767 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s (x8 over 9s)    kubelet          Node kubernetes-upgrade-209767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s (x8 over 9s)    kubelet          Node kubernetes-upgrade-209767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s (x7 over 9s)    kubelet          Node kubernetes-upgrade-209767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8s                 kubelet          Updated Node Allocatable limit across pods
	
	
	==> dmesg <==
	[  +0.000112] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Mar16 00:05] systemd-fstab-generator[568]: Ignoring "noauto" option for root device
	[  +0.059646] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066474] systemd-fstab-generator[580]: Ignoring "noauto" option for root device
	[  +0.189215] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.160808] systemd-fstab-generator[606]: Ignoring "noauto" option for root device
	[  +0.282171] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +4.828004] systemd-fstab-generator[727]: Ignoring "noauto" option for root device
	[  +0.067525] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.904674] systemd-fstab-generator[850]: Ignoring "noauto" option for root device
	[ +11.600104] systemd-fstab-generator[1219]: Ignoring "noauto" option for root device
	[  +0.095180] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.224158] kauditd_printk_skb: 21 callbacks suppressed
	[Mar16 00:06] systemd-fstab-generator[2001]: Ignoring "noauto" option for root device
	[  +0.111334] kauditd_printk_skb: 68 callbacks suppressed
	[  +0.102577] systemd-fstab-generator[2034]: Ignoring "noauto" option for root device
	[  +0.553057] systemd-fstab-generator[2282]: Ignoring "noauto" option for root device
	[  +0.273032] systemd-fstab-generator[2356]: Ignoring "noauto" option for root device
	[  +0.838200] systemd-fstab-generator[2609]: Ignoring "noauto" option for root device
	[  +1.500408] systemd-fstab-generator[3045]: Ignoring "noauto" option for root device
	[  +2.997728] systemd-fstab-generator[3547]: Ignoring "noauto" option for root device
	[  +0.099396] kauditd_printk_skb: 286 callbacks suppressed
	[  +5.184573] kauditd_printk_skb: 60 callbacks suppressed
	[  +1.069565] systemd-fstab-generator[4048]: Ignoring "noauto" option for root device
	
	
	==> etcd [2f02bdc6b71ce962c603d180e0bf4b4a19e5b76122c63d26a7ff534f22fb5e89] <==
	{"level":"warn","ts":"2024-03-16T00:06:07.007744Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-16T00:06:07.008018Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.39.143:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.39.143:2380","--initial-cluster=kubernetes-upgrade-209767=https://192.168.39.143:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.39.143:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.39.143:2380","--name=kubernetes-upgrade-209767","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--sna
pshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	{"level":"info","ts":"2024-03-16T00:06:07.008315Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	{"level":"warn","ts":"2024-03-16T00:06:07.027942Z","caller":"embed/config.go:676","msg":"Running http and grpc server on single port. This is not recommended for production."}
	{"level":"info","ts":"2024-03-16T00:06:07.027983Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.39.143:2380"]}
	{"level":"info","ts":"2024-03-16T00:06:07.0281Z","caller":"embed/etcd.go:495","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-16T00:06:07.0537Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.143:2379"]}
	{"level":"info","ts":"2024-03-16T00:06:07.05428Z","caller":"embed/etcd.go:309","msg":"starting an etcd server","etcd-version":"3.5.10","git-sha":"0223ca52b","go-version":"go1.20.10","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"kubernetes-upgrade-209767","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.39.143:2380"],"listen-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.143:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new"
,"initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	{"level":"info","ts":"2024-03-16T00:06:07.156742Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"97.846246ms"}
	{"level":"info","ts":"2024-03-16T00:06:07.175285Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2024-03-16T00:06:07.266347Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","commit-index":407}
	{"level":"info","ts":"2024-03-16T00:06:07.27172Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd switched to configuration voters=()"}
	{"level":"info","ts":"2024-03-16T00:06:07.273597Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became follower at term 2"}
	{"level":"info","ts":"2024-03-16T00:06:07.273757Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft be0eebdc09990bfd [peers: [], term: 2, commit: 407, applied: 0, lastindex: 407, lastterm: 2]"}
	
	
	==> etcd [5b338e519cac28baa70c19c133646c6087e28e9f0337c3d9ccee4d68e8c7b8d8] <==
	{"level":"info","ts":"2024-03-16T00:06:12.893352Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-16T00:06:12.893383Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-16T00:06:12.894368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd switched to configuration voters=(13695142847166614525)"}
	{"level":"info","ts":"2024-03-16T00:06:12.900726Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","added-peer-id":"be0eebdc09990bfd","added-peer-peer-urls":["https://192.168.39.143:2380"]}
	{"level":"info","ts":"2024-03-16T00:06:12.900891Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6857887556ef56db","local-member-id":"be0eebdc09990bfd","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:06:12.90094Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:06:12.929164Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-16T00:06:12.929422Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"be0eebdc09990bfd","initial-advertise-peer-urls":["https://192.168.39.143:2380"],"listen-peer-urls":["https://192.168.39.143:2380"],"advertise-client-urls":["https://192.168.39.143:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.143:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-16T00:06:12.929485Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-16T00:06:12.929607Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2024-03-16T00:06:12.929642Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.143:2380"}
	{"level":"info","ts":"2024-03-16T00:06:13.952362Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-16T00:06:13.95248Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-16T00:06:13.952602Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgPreVoteResp from be0eebdc09990bfd at term 2"}
	{"level":"info","ts":"2024-03-16T00:06:13.952644Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became candidate at term 3"}
	{"level":"info","ts":"2024-03-16T00:06:13.952677Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd received MsgVoteResp from be0eebdc09990bfd at term 3"}
	{"level":"info","ts":"2024-03-16T00:06:13.952706Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"be0eebdc09990bfd became leader at term 3"}
	{"level":"info","ts":"2024-03-16T00:06:13.952732Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: be0eebdc09990bfd elected leader be0eebdc09990bfd at term 3"}
	{"level":"info","ts":"2024-03-16T00:06:13.958183Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:06:13.958179Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"be0eebdc09990bfd","local-member-attributes":"{Name:kubernetes-upgrade-209767 ClientURLs:[https://192.168.39.143:2379]}","request-path":"/0/members/be0eebdc09990bfd/attributes","cluster-id":"6857887556ef56db","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:06:13.958725Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:06:13.959205Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:06:13.959271Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:06:13.961609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:06:13.961609Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.143:2379"}
	
	
	==> kernel <==
	 00:06:20 up 1 min,  0 users,  load average: 1.39, 0.38, 0.13
	Linux kubernetes-upgrade-209767 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a0c0574d45f2bc15cf4d21edf4001c0802256790092d9f27c5c226ace1634057] <==
	I0316 00:06:07.047406       1 options.go:222] external host was not specified, using 192.168.39.143
	I0316 00:06:07.048673       1 server.go:148] Version: v1.29.0-rc.2
	I0316 00:06:07.051711       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	
	
	==> kube-apiserver [f92e3f6cf36de78376231dd5afbfe906b043f90641a151ee3005a011ee60fc83] <==
	I0316 00:06:15.619308       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0316 00:06:15.619332       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0316 00:06:15.620421       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0316 00:06:15.621867       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0316 00:06:15.727094       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0316 00:06:15.730482       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0316 00:06:15.759779       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0316 00:06:15.759929       1 aggregator.go:165] initial CRD sync complete...
	I0316 00:06:15.760053       1 autoregister_controller.go:141] Starting autoregister controller
	I0316 00:06:15.760137       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0316 00:06:15.760203       1 cache.go:39] Caches are synced for autoregister controller
	I0316 00:06:15.760336       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0316 00:06:15.762041       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0316 00:06:15.764160       1 shared_informer.go:318] Caches are synced for configmaps
	I0316 00:06:15.765127       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0316 00:06:15.765192       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0316 00:06:15.765201       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	E0316 00:06:15.778877       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0316 00:06:16.512659       1 controller.go:624] quota admission added evaluator for: endpoints
	I0316 00:06:16.574365       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0316 00:06:17.590329       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0316 00:06:17.614972       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0316 00:06:17.671047       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0316 00:06:17.728191       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0316 00:06:17.743112       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [2358d22d7ee6812a10715bdd240b57a6bb8269245908bd3318244a59fe7e69a0] <==
	I0316 00:06:17.807986       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="ingresses.networking.k8s.io"
	I0316 00:06:17.808153       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="csistoragecapacities.storage.k8s.io"
	I0316 00:06:17.808233       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="leases.coordination.k8s.io"
	I0316 00:06:17.808341       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="podtemplates"
	I0316 00:06:17.808400       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="endpoints"
	I0316 00:06:17.808620       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="cronjobs.batch"
	I0316 00:06:17.808692       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="rolebindings.rbac.authorization.k8s.io"
	I0316 00:06:17.808742       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="daemonsets.apps"
	I0316 00:06:17.809012       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="jobs.batch"
	I0316 00:06:17.809238       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="roles.rbac.authorization.k8s.io"
	I0316 00:06:17.809331       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="limitranges"
	I0316 00:06:17.809388       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" resource="poddisruptionbudgets.policy"
	I0316 00:06:17.809437       1 controllermanager.go:735] "Started controller" controller="resourcequota-controller"
	I0316 00:06:17.809725       1 resource_quota_controller.go:294] "Starting resource quota controller"
	I0316 00:06:17.811192       1 shared_informer.go:311] Waiting for caches to sync for resource quota
	I0316 00:06:17.811593       1 resource_quota_monitor.go:305] "QuotaMonitor running"
	I0316 00:06:17.843830       1 garbagecollector.go:155] "Starting controller" controller="garbagecollector"
	I0316 00:06:17.843848       1 shared_informer.go:311] Waiting for caches to sync for garbage collector
	I0316 00:06:17.843868       1 graph_builder.go:302] "Running" component="GraphBuilder"
	I0316 00:06:17.844120       1 controllermanager.go:735] "Started controller" controller="garbage-collector-controller"
	I0316 00:06:17.854033       1 controllermanager.go:735] "Started controller" controller="ttl-controller"
	I0316 00:06:17.854916       1 ttl_controller.go:124] "Starting TTL controller"
	I0316 00:06:17.854933       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0316 00:06:17.868880       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0316 00:06:17.869281       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	
	
	==> kube-controller-manager [7ea4b98c368683e5a024e02bf207f9211a8df3a9782a64c7763538141c2a3cd2] <==
	
	
	==> kube-proxy [264a8bc433cd69bbd9425b4461d77afcf424245b0b8a9f8e3c5547b4f1634207] <==
	I0316 00:06:16.559955       1 server_others.go:72] "Using iptables proxy"
	I0316 00:06:16.577266       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.143"]
	I0316 00:06:16.647138       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0316 00:06:16.647368       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:06:16.647464       1 server_others.go:168] "Using iptables Proxier"
	I0316 00:06:16.657961       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:06:16.658135       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0316 00:06:16.658168       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:06:16.661401       1 config.go:188] "Starting service config controller"
	I0316 00:06:16.661466       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:06:16.661487       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:06:16.661491       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:06:16.662067       1 config.go:315] "Starting node config controller"
	I0316 00:06:16.662105       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:06:16.762463       1 shared_informer.go:318] Caches are synced for node config
	I0316 00:06:16.762634       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:06:16.762715       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [ed7bd1cbb5ac6e19aea0725db266056bd0abd3c9d118b66812de960132c4e719] <==
	
	
	==> kube-scheduler [253b1bc04dee2730f9f0ee4d4b81973c0b7ab37e274780cebd90305ff355909f] <==
	
	
	==> kube-scheduler [5fbedf67f538f496fa6d2fce235ee9e8985f04bd056d1541d8a08bdd6ae1e6a0] <==
	I0316 00:06:13.391553       1 serving.go:380] Generated self-signed cert in-memory
	W0316 00:06:15.655085       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:06:15.655194       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:06:15.655230       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:06:15.655261       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:06:15.738061       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0316 00:06:15.738292       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:06:15.743874       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:06:15.744606       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:06:15.744843       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:06:15.745018       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:06:15.845633       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:06:12 kubernetes-upgrade-209767 kubelet[3554]: E0316 00:06:12.538721    3554 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.39.143:8443: connect: connection refused" node="kubernetes-upgrade-209767"
	Mar 16 00:06:12 kubernetes-upgrade-209767 kubelet[3554]: W0316 00:06:12.828005    3554 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.143:8443: connect: connection refused
	Mar 16 00:06:12 kubernetes-upgrade-209767 kubelet[3554]: E0316 00:06:12.828139    3554 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.39.143:8443: connect: connection refused
	Mar 16 00:06:12 kubernetes-upgrade-209767 kubelet[3554]: W0316 00:06:12.900358    3554 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-209767&limit=500&resourceVersion=0": dial tcp 192.168.39.143:8443: connect: connection refused
	Mar 16 00:06:12 kubernetes-upgrade-209767 kubelet[3554]: E0316 00:06:12.900415    3554 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-209767&limit=500&resourceVersion=0": dial tcp 192.168.39.143:8443: connect: connection refused
	Mar 16 00:06:13 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:13.340763    3554 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-209767"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.814001    3554 apiserver.go:52] "Watching apiserver"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.815169    3554 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-209767"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.815585    3554 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-209767"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.818812    3554 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.820911    3554 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.828907    3554 topology_manager.go:215] "Topology Admit Handler" podUID="e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d" podNamespace="kube-system" podName="storage-provisioner"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.829194    3554 topology_manager.go:215] "Topology Admit Handler" podUID="af87f494-089e-4d48-8171-819e4f84808b" podNamespace="kube-system" podName="coredns-76f75df574-snqmp"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.829352    3554 topology_manager.go:215] "Topology Admit Handler" podUID="8544c08a-ec13-4973-b035-6d6962b0b040" podNamespace="kube-system" podName="coredns-76f75df574-v9vp7"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.829486    3554 topology_manager.go:215] "Topology Admit Handler" podUID="4c651ffe-df69-4a98-9285-21ef4517ac05" podNamespace="kube-system" podName="kube-proxy-xc8zk"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.831630    3554 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.836067    3554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c651ffe-df69-4a98-9285-21ef4517ac05-xtables-lock\") pod \"kube-proxy-xc8zk\" (UID: \"4c651ffe-df69-4a98-9285-21ef4517ac05\") " pod="kube-system/kube-proxy-xc8zk"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.836261    3554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c651ffe-df69-4a98-9285-21ef4517ac05-lib-modules\") pod \"kube-proxy-xc8zk\" (UID: \"4c651ffe-df69-4a98-9285-21ef4517ac05\") " pod="kube-system/kube-proxy-xc8zk"
	Mar 16 00:06:15 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:15.836604    3554 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d-tmp\") pod \"storage-provisioner\" (UID: \"e9dfeb16-9b95-4a3f-aa29-8d01a5361e1d\") " pod="kube-system/storage-provisioner"
	Mar 16 00:06:16 kubernetes-upgrade-209767 kubelet[3554]: E0316 00:06:16.072974    3554 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-kubernetes-upgrade-209767\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-209767"
	Mar 16 00:06:16 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:16.130362    3554 scope.go:117] "RemoveContainer" containerID="4290f367ad18d79635b85052552cfa66814c10ee77a8c1842d1991eb84df7829"
	Mar 16 00:06:16 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:16.131803    3554 scope.go:117] "RemoveContainer" containerID="e9ec61f64a4fb21999f78063484671982d40b321fd0b20cf01e904abf1830e0a"
	Mar 16 00:06:16 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:16.132218    3554 scope.go:117] "RemoveContainer" containerID="ed7bd1cbb5ac6e19aea0725db266056bd0abd3c9d118b66812de960132c4e719"
	Mar 16 00:06:16 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:16.132708    3554 scope.go:117] "RemoveContainer" containerID="ea9406a701f064aa45a76fd6ba8666dc35b32966b6cc37290c3b6fc7ce277ff7"
	Mar 16 00:06:20 kubernetes-upgrade-209767 kubelet[3554]: I0316 00:06:20.910883    3554 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [e4d8b487d0f4d2f49a19e5b3ddc0f3a10612caf79d580598be77985551f6795b] <==
	I0316 00:06:16.436976       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:06:16.491197       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:06:16.491277       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:06:16.527584       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:06:16.527784       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-209767_deb90d4d-5798-4a4a-904a-94c08ef0a4f6!
	I0316 00:06:16.528424       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"dc376e66-e413-495e-a936-a7452d9320d4", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-209767_deb90d4d-5798-4a4a-904a-94c08ef0a4f6 became leader
	I0316 00:06:16.628651       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-209767_deb90d4d-5798-4a4a-904a-94c08ef0a4f6!
	
	
	==> storage-provisioner [ea9406a701f064aa45a76fd6ba8666dc35b32966b6cc37290c3b6fc7ce277ff7] <==
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-209767 -n kubernetes-upgrade-209767
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-209767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-209767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-209767
--- FAIL: TestKubernetesUpgrade (384.17s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (69.73s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-033460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-033460 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m5.251107112s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-033460] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-033460" primary control-plane node in "pause-033460" cluster
	* Updating the running kvm2 "pause-033460" VM ...
	* Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-033460" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:03:50.427555  117977 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:03:50.427741  117977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:03:50.427754  117977 out.go:304] Setting ErrFile to fd 2...
	I0316 00:03:50.427761  117977 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:03:50.428085  117977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:03:50.428855  117977 out.go:298] Setting JSON to false
	I0316 00:03:50.433093  117977 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9980,"bootTime":1710537450,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:03:50.433195  117977 start.go:139] virtualization: kvm guest
	I0316 00:03:50.435625  117977 out.go:177] * [pause-033460] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:03:50.438365  117977 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:03:50.438324  117977 notify.go:220] Checking for updates...
	I0316 00:03:50.440239  117977 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:03:50.441550  117977 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:03:50.442937  117977 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:03:50.444676  117977 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:03:50.446220  117977 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:03:50.448563  117977 config.go:182] Loaded profile config "pause-033460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:03:50.449145  117977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0316 00:03:50.449202  117977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:03:50.476838  117977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37299
	I0316 00:03:50.477482  117977 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:03:50.478227  117977 main.go:141] libmachine: Using API Version  1
	I0316 00:03:50.478260  117977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:03:50.479128  117977 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:03:50.479404  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:50.479727  117977 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:03:50.480092  117977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0316 00:03:50.480138  117977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:03:50.497512  117977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37817
	I0316 00:03:50.497965  117977 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:03:50.498478  117977 main.go:141] libmachine: Using API Version  1
	I0316 00:03:50.498497  117977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:03:50.499005  117977 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:03:50.499199  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:50.552600  117977 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:03:50.554664  117977 start.go:297] selected driver: kvm2
	I0316 00:03:50.554692  117977 start.go:901] validating driver "kvm2" against &{Name:pause-033460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-033460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:03:50.554858  117977 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:03:50.555187  117977 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:03:50.555298  117977 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:03:50.571646  117977 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:03:50.572685  117977 cni.go:84] Creating CNI manager for ""
	I0316 00:03:50.572708  117977 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:03:50.572780  117977 start.go:340] cluster config:
	{Name:pause-033460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-033460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:03:50.572954  117977 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:03:50.574940  117977 out.go:177] * Starting "pause-033460" primary control-plane node in "pause-033460" cluster
	I0316 00:03:50.576600  117977 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:03:50.576633  117977 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0316 00:03:50.576642  117977 cache.go:56] Caching tarball of preloaded images
	I0316 00:03:50.576719  117977 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:03:50.576730  117977 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0316 00:03:50.576867  117977 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/config.json ...
	I0316 00:03:50.577141  117977 start.go:360] acquireMachinesLock for pause-033460: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:03:50.577199  117977 start.go:364] duration metric: took 33.505µs to acquireMachinesLock for "pause-033460"
	I0316 00:03:50.577219  117977 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:03:50.577230  117977 fix.go:54] fixHost starting: 
	I0316 00:03:50.577511  117977 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0316 00:03:50.577545  117977 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:03:50.594036  117977 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38547
	I0316 00:03:50.594501  117977 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:03:50.595079  117977 main.go:141] libmachine: Using API Version  1
	I0316 00:03:50.595115  117977 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:03:50.595536  117977 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:03:50.595743  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:50.595902  117977 main.go:141] libmachine: (pause-033460) Calling .GetState
	I0316 00:03:50.597673  117977 fix.go:112] recreateIfNeeded on pause-033460: state=Running err=<nil>
	W0316 00:03:50.597694  117977 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:03:50.600029  117977 out.go:177] * Updating the running kvm2 "pause-033460" VM ...
	I0316 00:03:50.601581  117977 machine.go:94] provisionDockerMachine start ...
	I0316 00:03:50.601609  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:50.601819  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:50.604367  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:50.604963  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:50.604997  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:50.605143  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:50.605335  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:50.605505  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:50.605657  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:50.605825  117977 main.go:141] libmachine: Using SSH client type: native
	I0316 00:03:50.606015  117977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0316 00:03:50.606028  117977 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:03:50.728251  117977 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-033460
	
	I0316 00:03:50.728281  117977 main.go:141] libmachine: (pause-033460) Calling .GetMachineName
	I0316 00:03:50.728549  117977 buildroot.go:166] provisioning hostname "pause-033460"
	I0316 00:03:50.728578  117977 main.go:141] libmachine: (pause-033460) Calling .GetMachineName
	I0316 00:03:50.728794  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:50.731626  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:50.732015  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:50.732039  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:50.732239  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:50.732427  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:50.732616  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:50.732776  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:50.732952  117977 main.go:141] libmachine: Using SSH client type: native
	I0316 00:03:50.733181  117977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0316 00:03:50.733200  117977 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-033460 && echo "pause-033460" | sudo tee /etc/hostname
	I0316 00:03:50.875024  117977 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-033460
	
	I0316 00:03:50.875060  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:50.878317  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:50.878734  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:50.878761  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:50.878946  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:50.879160  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:50.879329  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:50.879461  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:50.879629  117977 main.go:141] libmachine: Using SSH client type: native
	I0316 00:03:50.879813  117977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0316 00:03:50.879837  117977 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-033460' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-033460/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-033460' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:03:51.008380  117977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:03:51.008414  117977 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:03:51.008454  117977 buildroot.go:174] setting up certificates
	I0316 00:03:51.008468  117977 provision.go:84] configureAuth start
	I0316 00:03:51.008485  117977 main.go:141] libmachine: (pause-033460) Calling .GetMachineName
	I0316 00:03:51.008791  117977 main.go:141] libmachine: (pause-033460) Calling .GetIP
	I0316 00:03:51.011946  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.012383  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:51.012416  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.012578  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:51.015475  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.015902  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:51.015933  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.016086  117977 provision.go:143] copyHostCerts
	I0316 00:03:51.016153  117977 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:03:51.016167  117977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:03:51.016249  117977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:03:51.016380  117977 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:03:51.016393  117977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:03:51.016426  117977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:03:51.016505  117977 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:03:51.016515  117977 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:03:51.016541  117977 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:03:51.016604  117977 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.pause-033460 san=[127.0.0.1 192.168.50.7 localhost minikube pause-033460]
	I0316 00:03:51.163714  117977 provision.go:177] copyRemoteCerts
	I0316 00:03:51.163773  117977 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:03:51.163804  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:51.166613  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.167049  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:51.167078  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.167288  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:51.167513  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:51.167705  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:51.167845  117977 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/pause-033460/id_rsa Username:docker}
	I0316 00:03:51.268407  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:03:51.301586  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0316 00:03:51.336455  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0316 00:03:51.366475  117977 provision.go:87] duration metric: took 357.992278ms to configureAuth
	I0316 00:03:51.366501  117977 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:03:51.366737  117977 config.go:182] Loaded profile config "pause-033460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:03:51.366828  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:51.369671  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.370043  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:51.370066  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:51.370205  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:51.370411  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:51.370707  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:51.370885  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:51.371080  117977 main.go:141] libmachine: Using SSH client type: native
	I0316 00:03:51.371299  117977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0316 00:03:51.371334  117977 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:03:56.945086  117977 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:03:56.945119  117977 machine.go:97] duration metric: took 6.343524875s to provisionDockerMachine
	I0316 00:03:56.945134  117977 start.go:293] postStartSetup for "pause-033460" (driver="kvm2")
	I0316 00:03:56.945147  117977 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:03:56.945165  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:56.945597  117977 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:03:56.945633  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:56.948995  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:56.949549  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:56.949574  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:56.949753  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:56.949970  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:56.950139  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:56.950286  117977 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/pause-033460/id_rsa Username:docker}
	I0316 00:03:57.038930  117977 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:03:57.043295  117977 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:03:57.043341  117977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:03:57.043401  117977 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:03:57.043497  117977 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:03:57.043598  117977 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:03:57.054008  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:03:57.081574  117977 start.go:296] duration metric: took 136.423468ms for postStartSetup
	I0316 00:03:57.081627  117977 fix.go:56] duration metric: took 6.504396309s for fixHost
	I0316 00:03:57.081655  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:57.084341  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.084746  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:57.084780  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.084953  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:57.085170  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:57.085338  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:57.085480  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:57.085652  117977 main.go:141] libmachine: Using SSH client type: native
	I0316 00:03:57.085826  117977 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.7 22 <nil> <nil>}
	I0316 00:03:57.085838  117977 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:03:57.204249  117977 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710547437.197814915
	
	I0316 00:03:57.204279  117977 fix.go:216] guest clock: 1710547437.197814915
	I0316 00:03:57.204289  117977 fix.go:229] Guest: 2024-03-16 00:03:57.197814915 +0000 UTC Remote: 2024-03-16 00:03:57.081633855 +0000 UTC m=+6.728803672 (delta=116.18106ms)
	I0316 00:03:57.204315  117977 fix.go:200] guest clock delta is within tolerance: 116.18106ms
	I0316 00:03:57.204336  117977 start.go:83] releasing machines lock for "pause-033460", held for 6.62711031s
	I0316 00:03:57.204366  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:57.204665  117977 main.go:141] libmachine: (pause-033460) Calling .GetIP
	I0316 00:03:57.208031  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.208468  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:57.208503  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.208690  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:57.209379  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:57.209595  117977 main.go:141] libmachine: (pause-033460) Calling .DriverName
	I0316 00:03:57.209724  117977 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:03:57.209779  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:57.209802  117977 ssh_runner.go:195] Run: cat /version.json
	I0316 00:03:57.209827  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHHostname
	I0316 00:03:57.212410  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.212561  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.212766  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:57.212795  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.212922  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:03:57.212950  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:03:57.212952  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:57.213180  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHPort
	I0316 00:03:57.213186  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:57.213393  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHKeyPath
	I0316 00:03:57.213396  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:57.213532  117977 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/pause-033460/id_rsa Username:docker}
	I0316 00:03:57.213579  117977 main.go:141] libmachine: (pause-033460) Calling .GetSSHUsername
	I0316 00:03:57.213739  117977 sshutil.go:53] new ssh client: &{IP:192.168.50.7 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/pause-033460/id_rsa Username:docker}
	I0316 00:03:57.300786  117977 ssh_runner.go:195] Run: systemctl --version
	I0316 00:03:57.326593  117977 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:03:57.492330  117977 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:03:57.501138  117977 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:03:57.501215  117977 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:03:57.515374  117977 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0316 00:03:57.515402  117977 start.go:494] detecting cgroup driver to use...
	I0316 00:03:57.515480  117977 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:03:57.538179  117977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:03:57.553395  117977 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:03:57.553472  117977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:03:57.567746  117977 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:03:57.582685  117977 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:03:57.714918  117977 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:03:57.860575  117977 docker.go:233] disabling docker service ...
	I0316 00:03:57.860691  117977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:03:57.878077  117977 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:03:57.892924  117977 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:03:58.032201  117977 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:03:58.165201  117977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:03:58.180212  117977 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:03:58.205687  117977 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:03:58.205741  117977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:03:58.218365  117977 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:03:58.218432  117977 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:03:58.230020  117977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:03:58.241125  117977 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:03:58.251549  117977 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:03:58.262625  117977 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:03:58.272091  117977 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:03:58.281662  117977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:03:58.433166  117977 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:04:06.514139  117977 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.080916315s)
	I0316 00:04:06.514209  117977 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:04:06.514285  117977 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:04:06.521582  117977 start.go:562] Will wait 60s for crictl version
	I0316 00:04:06.521660  117977 ssh_runner.go:195] Run: which crictl
	I0316 00:04:06.526066  117977 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:04:06.574086  117977 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:04:06.574206  117977 ssh_runner.go:195] Run: crio --version
	I0316 00:04:06.610406  117977 ssh_runner.go:195] Run: crio --version
	I0316 00:04:06.642901  117977 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:04:06.644424  117977 main.go:141] libmachine: (pause-033460) Calling .GetIP
	I0316 00:04:06.647840  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:04:06.648280  117977 main.go:141] libmachine: (pause-033460) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8d:52:08", ip: ""} in network mk-pause-033460: {Iface:virbr2 ExpiryTime:2024-03-16 01:02:26 +0000 UTC Type:0 Mac:52:54:00:8d:52:08 Iaid: IPaddr:192.168.50.7 Prefix:24 Hostname:pause-033460 Clientid:01:52:54:00:8d:52:08}
	I0316 00:04:06.648311  117977 main.go:141] libmachine: (pause-033460) DBG | domain pause-033460 has defined IP address 192.168.50.7 and MAC address 52:54:00:8d:52:08 in network mk-pause-033460
	I0316 00:04:06.648532  117977 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:04:06.653415  117977 kubeadm.go:877] updating cluster {Name:pause-033460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:pause-033460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:04:06.653613  117977 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:04:06.653672  117977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:04:06.712473  117977 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:04:06.712504  117977 crio.go:415] Images already preloaded, skipping extraction
	I0316 00:04:06.712567  117977 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:04:06.748292  117977 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:04:06.748320  117977 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:04:06.748331  117977 kubeadm.go:928] updating node { 192.168.50.7 8443 v1.28.4 crio true true} ...
	I0316 00:04:06.748497  117977 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-033460 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.7
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:pause-033460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:04:06.748576  117977 ssh_runner.go:195] Run: crio config
	I0316 00:04:06.803507  117977 cni.go:84] Creating CNI manager for ""
	I0316 00:04:06.803544  117977 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:04:06.803565  117977 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:04:06.803596  117977 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.7 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-033460 NodeName:pause-033460 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.7"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.7 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:04:06.803825  117977 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.7
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-033460"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.7
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.7"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:04:06.803914  117977 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:04:06.815044  117977 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:04:06.815151  117977 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:04:06.825507  117977 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (310 bytes)
	I0316 00:04:06.846296  117977 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:04:06.866619  117977 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2150 bytes)
	I0316 00:04:06.886643  117977 ssh_runner.go:195] Run: grep 192.168.50.7	control-plane.minikube.internal$ /etc/hosts
	I0316 00:04:06.891220  117977 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:04:07.030912  117977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:04:07.047283  117977 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460 for IP: 192.168.50.7
	I0316 00:04:07.047314  117977 certs.go:194] generating shared ca certs ...
	I0316 00:04:07.047358  117977 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:04:07.047594  117977 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:04:07.047656  117977 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:04:07.047667  117977 certs.go:256] generating profile certs ...
	I0316 00:04:07.047862  117977 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.key
	I0316 00:04:07.047982  117977 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/apiserver.key.266c4d25
	I0316 00:04:07.048022  117977 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/proxy-client.key
	I0316 00:04:07.048150  117977 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:04:07.048182  117977 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:04:07.048192  117977 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:04:07.048221  117977 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:04:07.048250  117977 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:04:07.048270  117977 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:04:07.048378  117977 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:04:07.049128  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:04:07.078998  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:04:07.109702  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:04:07.136551  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:04:07.165769  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0316 00:04:07.191943  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:04:07.217847  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:04:07.244043  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:04:07.274224  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:04:07.301287  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:04:07.333035  117977 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:04:07.358880  117977 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:04:07.376280  117977 ssh_runner.go:195] Run: openssl version
	I0316 00:04:07.382813  117977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:04:07.394156  117977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:07.411836  117977 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:07.411921  117977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:07.422484  117977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:04:07.461507  117977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:04:07.505415  117977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:04:07.537475  117977 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:04:07.537555  117977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:04:07.568279  117977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:04:07.653742  117977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:04:07.700907  117977 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:04:07.755916  117977 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:04:07.756014  117977 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:04:07.790367  117977 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:04:07.836136  117977 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:04:07.862957  117977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:04:08.004270  117977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:04:08.071370  117977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:04:08.103772  117977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:04:08.116632  117977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:04:08.158273  117977 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:04:08.186076  117977 kubeadm.go:391] StartCluster: {Name:pause-033460 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:pause-033460 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.7 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm
:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:04:08.186262  117977 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:04:08.186334  117977 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:04:08.476633  117977 cri.go:89] found id: "3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d"
	I0316 00:04:08.476666  117977 cri.go:89] found id: "c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27"
	I0316 00:04:08.476671  117977 cri.go:89] found id: "5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9"
	I0316 00:04:08.476676  117977 cri.go:89] found id: "7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4"
	I0316 00:04:08.476680  117977 cri.go:89] found id: "d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979"
	I0316 00:04:08.476685  117977 cri.go:89] found id: "3f7aeb76d3f5c55b1e910eed27b4a072c1b5088e2ab3221174614d3fe2a9f44f"
	I0316 00:04:08.476689  117977 cri.go:89] found id: "26659c8c26484b1c9ab850799076bc4d06524a3422f835c60fa9cc68f3bfdf53"
	I0316 00:04:08.476693  117977 cri.go:89] found id: "2af1101ea65f77f96e7ea00b74b6531f09740f336b818978336fbb41e2402575"
	I0316 00:04:08.476696  117977 cri.go:89] found id: "0f53fddef2f60c8bd2970c191aff5d58060a08258c51efca79de062bca9d0f70"
	I0316 00:04:08.476703  117977 cri.go:89] found id: "e6de0795e7a78b8cfe920f2af696c78950dfeeb095f7ca90f3cc661664b8c5d8"
	I0316 00:04:08.476708  117977 cri.go:89] found id: "c55b0d8d6443de0b5fa62943ab9af52a77d2ee165a11bf19f8cf42ab085de90e"
	I0316 00:04:08.476712  117977 cri.go:89] found id: ""
	I0316 00:04:08.476786  117977 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
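
The container listing that ends the truncated output above comes from running crictl against the CRI-O runtime with a label filter, which returns the IDs of every kube-system container, running or exited. The snippet below is a minimal sketch of that query, assuming crictl is installed and the caller may use sudo; output parsing is reduced to splitting on whitespace.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same flags as the ssh_runner command in the log: all containers, IDs only,
	// restricted to pods in the kube-system namespace.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}
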
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-033460 -n pause-033460
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-033460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-033460 logs -n 25: (1.680551704s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo docker                         | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| start   | -p pause-033460                                      | pause-033460              | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:04 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo find                           | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo crio                           | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-869135                                     | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:03 UTC |
	| delete  | -p force-systemd-env-380757                          | force-systemd-env-380757  | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:03 UTC |
	| start   | -p stopped-upgrade-684927                            | minikube                  | jenkins | v1.26.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:04 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-982877                            | cert-expiration-982877    | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-209767                         | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:04 UTC |
	| start   | -p kubernetes-upgrade-209767                         | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-684927 stop                          | minikube                  | jenkins | v1.26.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:04 UTC |
	| start   | -p stopped-upgrade-684927                            | stopped-upgrade-684927    | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:04:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
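
The header above documents the klog/glog line format used throughout these logs. As an illustration only, the hypothetical regular expression below splits one such line (copied from the first entry that follows) into its fields.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	re := regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	line := "I0316 00:04:53.247654  119090 out.go:291] Setting OutFile to fd 1 ..."
	m := re.FindStringSubmatch(line)
	if m == nil {
		panic("line did not match")
	}
	fmt.Printf("severity=%s date=%s time=%s thread=%s source=%s msg=%q\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
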
	I0316 00:04:53.247654  119090 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:04:53.247859  119090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:04:53.247873  119090 out.go:304] Setting ErrFile to fd 2...
	I0316 00:04:53.247880  119090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:04:53.248199  119090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:04:53.248974  119090 out.go:298] Setting JSON to false
	I0316 00:04:53.250315  119090 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10043,"bootTime":1710537450,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:04:53.250405  119090 start.go:139] virtualization: kvm guest
	I0316 00:04:53.253056  119090 out.go:177] * [stopped-upgrade-684927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:04:53.255141  119090 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:04:53.255161  119090 notify.go:220] Checking for updates...
	I0316 00:04:53.256649  119090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:04:53.258124  119090 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:04:53.259505  119090 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:04:53.260929  119090 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:04:53.262302  119090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:04:53.264245  119090 config.go:182] Loaded profile config "stopped-upgrade-684927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0316 00:04:53.264849  119090 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:04:53.264918  119090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:04:53.285291  119090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0316 00:04:53.285816  119090 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:04:53.286456  119090 main.go:141] libmachine: Using API Version  1
	I0316 00:04:53.286482  119090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:04:53.286882  119090 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:04:53.287111  119090 main.go:141] libmachine: (stopped-upgrade-684927) Calling .DriverName
	I0316 00:04:53.289150  119090 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:04:53.290524  119090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:04:53.291007  119090 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:04:53.291057  119090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:04:53.306728  119090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0316 00:04:53.307128  119090 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:04:53.307668  119090 main.go:141] libmachine: Using API Version  1
	I0316 00:04:53.307693  119090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:04:53.308032  119090 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:04:53.308249  119090 main.go:141] libmachine: (stopped-upgrade-684927) Calling .DriverName
	I0316 00:04:53.348448  119090 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:04:53.349994  119090 start.go:297] selected driver: kvm2
	I0316 00:04:53.350026  119090 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-684927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684
927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0316 00:04:53.350149  119090 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:04:53.350883  119090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:04:53.350962  119090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:04:53.366255  119090 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:04:53.366665  119090 cni.go:84] Creating CNI manager for ""
	I0316 00:04:53.366689  119090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:04:53.366764  119090 start.go:340] cluster config:
	{Name:stopped-upgrade-684927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0316 00:04:53.369023  119090 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:04:53.371033  119090 out.go:177] * Starting "stopped-upgrade-684927" primary control-plane node in "stopped-upgrade-684927" cluster
	I0316 00:04:52.014555  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:04:52.152422  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:04:52.186987  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:04:52.215117  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:04:52.243781  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:04:52.282928  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:04:52.313845  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:04:52.342382  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:04:52.376268  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:04:52.412593  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:04:52.440427  118366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:04:52.461852  118366 ssh_runner.go:195] Run: openssl version
	I0316 00:04:52.468628  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:04:52.483741  118366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:04:52.489036  118366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:04:52.489096  118366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:04:52.496613  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:04:52.512277  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:04:52.546339  118366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:52.551526  118366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:52.551593  118366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:52.558178  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:04:52.571259  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:04:52.585217  118366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:04:52.591263  118366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:04:52.591346  118366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:04:52.603028  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:04:52.615501  118366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:04:52.619980  118366 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 00:04:52.620037  118366 kubeadm.go:391] StartCluster: {Name:cert-expiration-982877 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.28.4 ClusterName:cert-expiration-982877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:04:52.620125  118366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:04:52.620194  118366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:04:52.663550  118366 cri.go:89] found id: ""
	I0316 00:04:52.663648  118366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 00:04:52.674892  118366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:04:52.686352  118366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:04:52.698165  118366 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:04:52.698181  118366 kubeadm.go:156] found existing configuration files:
	
	I0316 00:04:52.698243  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:04:52.709357  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:04:52.709423  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:04:52.721136  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:04:52.732739  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:04:52.732816  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:04:52.744006  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:04:52.754913  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:04:52.754973  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:04:52.766528  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:04:52.780303  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:04:52.780370  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
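
The sequence above is the stale-config cleanup that runs before kubeadm: each existing kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed if the endpoint is absent (here the files simply do not exist yet, so all four are removed and regenerated). Below is a rough Go sketch of the same loop, with the endpoint and file list taken from the log and error handling simplified.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: drop it and let "kubeadm init" recreate it.
			os.Remove(f)
			fmt.Println("removed", f)
		}
	}
}
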
	I0316 00:04:52.794598  118366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:04:52.934327  118366 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0316 00:04:52.934622  118366 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:04:53.125378  118366 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:04:53.125531  118366 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:04:53.125640  118366 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:04:53.398914  118366 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:04:52.916131  117977 addons.go:505] duration metric: took 424.192887ms for enable addons: enabled=[]
	I0316 00:04:52.996107  117977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:04:53.018521  117977 node_ready.go:35] waiting up to 6m0s for node "pause-033460" to be "Ready" ...
	I0316 00:04:53.023172  117977 node_ready.go:49] node "pause-033460" has status "Ready":"True"
	I0316 00:04:53.023209  117977 node_ready.go:38] duration metric: took 4.648505ms for node "pause-033460" to be "Ready" ...
	I0316 00:04:53.023224  117977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:04:53.038671  117977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-px5pk" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.045885  117977 pod_ready.go:92] pod "coredns-5dd5756b68-px5pk" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.045919  117977 pod_ready.go:81] duration metric: took 7.199094ms for pod "coredns-5dd5756b68-px5pk" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.045931  117977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.151673  117977 pod_ready.go:92] pod "etcd-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.151705  117977 pod_ready.go:81] duration metric: took 105.764484ms for pod "etcd-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.151718  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.527146  117977 pod_ready.go:92] pod "kube-apiserver-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.527178  117977 pod_ready.go:81] duration metric: took 375.450745ms for pod "kube-apiserver-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.527192  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.926677  117977 pod_ready.go:92] pod "kube-controller-manager-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.926715  117977 pod_ready.go:81] duration metric: took 399.513956ms for pod "kube-controller-manager-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.926733  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbw4r" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.326307  117977 pod_ready.go:92] pod "kube-proxy-zbw4r" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:54.326344  117977 pod_ready.go:81] duration metric: took 399.602011ms for pod "kube-proxy-zbw4r" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.326357  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.726188  117977 pod_ready.go:92] pod "kube-scheduler-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:54.726222  117977 pod_ready.go:81] duration metric: took 399.856352ms for pod "kube-scheduler-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.726233  117977 pod_ready.go:38] duration metric: took 1.702994885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
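
The pod_ready.go entries above show the readiness gate: for each system-critical pod, the API server is polled until the pod reports the Ready condition. The sketch below is not minikube's implementation, just a minimal client-go equivalent under the assumption of a standard kubeconfig at the default location; the namespace and pod name are taken from the log for illustration.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls every two seconds until the named pod has the Ready
// condition set to True, or the timeout elapses.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(cs, "kube-system", "etcd-pause-033460", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
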
	I0316 00:04:54.726254  117977 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:04:54.726316  117977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:04:54.742664  117977 api_server.go:72] duration metric: took 2.250768002s to wait for apiserver process to appear ...
	I0316 00:04:54.742692  117977 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:04:54.742737  117977 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0316 00:04:54.749499  117977 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0316 00:04:54.751161  117977 api_server.go:141] control plane version: v1.28.4
	I0316 00:04:54.751188  117977 api_server.go:131] duration metric: took 8.48764ms to wait for apiserver health ...
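
Before declaring the cluster healthy, the API server's /healthz endpoint is probed and expected to return HTTP 200 with the body "ok", as logged above. The following is a minimal sketch of that probe against the endpoint from the log; the real client authenticates with the cluster's CA and client certificates, whereas this illustration skips TLS verification for brevity.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.50.7:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok".
	fmt.Println(resp.StatusCode, string(body))
}
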
	I0316 00:04:54.751199  117977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:04:54.931558  117977 system_pods.go:59] 6 kube-system pods found
	I0316 00:04:54.931591  117977 system_pods.go:61] "coredns-5dd5756b68-px5pk" [97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c] Running
	I0316 00:04:54.931597  117977 system_pods.go:61] "etcd-pause-033460" [02cd1ede-7926-43e9-9b4f-b507c75e5838] Running
	I0316 00:04:54.931602  117977 system_pods.go:61] "kube-apiserver-pause-033460" [a8fc1125-4f29-447f-ad59-5d2332fcb764] Running
	I0316 00:04:54.931607  117977 system_pods.go:61] "kube-controller-manager-pause-033460" [d50419c8-a4d0-4d4a-974e-37b0a9d9e7ad] Running
	I0316 00:04:54.931612  117977 system_pods.go:61] "kube-proxy-zbw4r" [053cbe3c-45a9-44d2-b4a8-c98db95e8175] Running
	I0316 00:04:54.931616  117977 system_pods.go:61] "kube-scheduler-pause-033460" [5aa60435-9003-4902-9a39-7a3f263d5a3c] Running
	I0316 00:04:54.931624  117977 system_pods.go:74] duration metric: took 180.417319ms to wait for pod list to return data ...
	I0316 00:04:54.931633  117977 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:04:55.126073  117977 default_sa.go:45] found service account: "default"
	I0316 00:04:55.126108  117977 default_sa.go:55] duration metric: took 194.467332ms for default service account to be created ...
	I0316 00:04:55.126120  117977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:04:55.334064  117977 system_pods.go:86] 6 kube-system pods found
	I0316 00:04:55.334100  117977 system_pods.go:89] "coredns-5dd5756b68-px5pk" [97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c] Running
	I0316 00:04:55.334108  117977 system_pods.go:89] "etcd-pause-033460" [02cd1ede-7926-43e9-9b4f-b507c75e5838] Running
	I0316 00:04:55.334122  117977 system_pods.go:89] "kube-apiserver-pause-033460" [a8fc1125-4f29-447f-ad59-5d2332fcb764] Running
	I0316 00:04:55.334130  117977 system_pods.go:89] "kube-controller-manager-pause-033460" [d50419c8-a4d0-4d4a-974e-37b0a9d9e7ad] Running
	I0316 00:04:55.334136  117977 system_pods.go:89] "kube-proxy-zbw4r" [053cbe3c-45a9-44d2-b4a8-c98db95e8175] Running
	I0316 00:04:55.334142  117977 system_pods.go:89] "kube-scheduler-pause-033460" [5aa60435-9003-4902-9a39-7a3f263d5a3c] Running
	I0316 00:04:55.334152  117977 system_pods.go:126] duration metric: took 208.024429ms to wait for k8s-apps to be running ...
	I0316 00:04:55.334164  117977 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:04:55.334219  117977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:04:55.355853  117977 system_svc.go:56] duration metric: took 21.678177ms WaitForService to wait for kubelet
	I0316 00:04:55.355890  117977 kubeadm.go:576] duration metric: took 2.863998496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:04:55.355914  117977 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:04:55.527731  117977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:04:55.527760  117977 node_conditions.go:123] node cpu capacity is 2
	I0316 00:04:55.527770  117977 node_conditions.go:105] duration metric: took 171.849965ms to run NodePressure ...
	I0316 00:04:55.527782  117977 start.go:240] waiting for startup goroutines ...
	I0316 00:04:55.527789  117977 start.go:245] waiting for cluster config update ...
	I0316 00:04:55.527795  117977 start.go:254] writing updated cluster config ...
	I0316 00:04:55.528133  117977 ssh_runner.go:195] Run: rm -f paused
	I0316 00:04:55.580235  117977 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:04:55.582291  117977 out.go:177] * Done! kubectl is now configured to use "pause-033460" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.359081268Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=628f4270-fa19-4ca2-8fa8-d718259db9d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.359609512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=628f4270-fa19-4ca2-8fa8-d718259db9d5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.413289565Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=392c9bf9-fccd-42db-b0cc-d7d4de5b8eba name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.413518140Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=392c9bf9-fccd-42db-b0cc-d7d4de5b8eba name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.415107562Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=638a78d9-2696-4e9d-9f2b-761f1f8e89e1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.415720805Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547496415691171,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=638a78d9-2696-4e9d-9f2b-761f1f8e89e1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.417392853Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6c4b1b5-7fe4-4ac6-94d3-09a5adf987d1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.417499325Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b6c4b1b5-7fe4-4ac6-94d3-09a5adf987d1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.417901403Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b6c4b1b5-7fe4-4ac6-94d3-09a5adf987d1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.493621398Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89ccd43e-ce92-4a0c-aa27-bd46bd20584f name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.493715439Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89ccd43e-ce92-4a0c-aa27-bd46bd20584f name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.494920918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f87dfd4b-dfb0-43c1-9f5e-0fd7edfeb045 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.495315921Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547496495289563,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f87dfd4b-dfb0-43c1-9f5e-0fd7edfeb045 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.495918202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c6abe093-91e6-4ee1-a97d-881e644ea358 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.495990341Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c6abe093-91e6-4ee1-a97d-881e644ea358 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.496239395Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c6abe093-91e6-4ee1-a97d-881e644ea358 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.552107202Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1a69e971-6860-4138-9de5-600136b94212 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.552232946Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1a69e971-6860-4138-9de5-600136b94212 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.553616860Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b699837-0879-4b73-9881-2607cd98f6f6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.554182002Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547496554144743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b699837-0879-4b73-9881-2607cd98f6f6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.554723529Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=89f37fe9-d026-43b9-b509-4b586fed579c name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.554819656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=89f37fe9-d026-43b9-b509-4b586fed579c name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.555174007Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=89f37fe9-d026-43b9-b509-4b586fed579c name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.569134952Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="otel-collector/interceptors.go:62" id=e887a497-da6e-4c7f-8c34-5f8d2429667c name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:56 pause-033460 crio[2288]: time="2024-03-16 00:04:56.569248679Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e887a497-da6e-4c7f-8c34-5f8d2429667c name=/runtime.v1.RuntimeService/Version
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b8a5530484e3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   20 seconds ago      Running             kube-proxy                2                   5344a8bad2a6c       kube-proxy-zbw4r
	cacdbc3b880cc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   24 seconds ago      Running             kube-controller-manager   2                   036c82fddf805       kube-controller-manager-pause-033460
	6774f685c0e50       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   24 seconds ago      Running             kube-scheduler            2                   54d9fc9480eb4       kube-scheduler-pause-033460
	00b89a48cb098       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   24 seconds ago      Running             kube-apiserver            2                   bd7c7f51f2f90       kube-apiserver-pause-033460
	99c11f33033d6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago      Running             etcd                      2                   83711b8742f24       etcd-pause-033460
	5945c975bf56d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   27 seconds ago      Running             coredns                   2                   6679c749bfbc1       coredns-5dd5756b68-px5pk
	3e29e1aa5b91a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   48 seconds ago      Exited              coredns                   1                   6679c749bfbc1       coredns-5dd5756b68-px5pk
	5bdd14598b4a0       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   48 seconds ago      Exited              kube-proxy                1                   5344a8bad2a6c       kube-proxy-zbw4r
	c2ade6bc9d21f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   48 seconds ago      Exited              kube-apiserver            1                   bd7c7f51f2f90       kube-apiserver-pause-033460
	5e8f5df1ebf95       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   48 seconds ago      Exited              etcd                      1                   83711b8742f24       etcd-pause-033460
	7b90ea0d40f58       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   48 seconds ago      Exited              kube-controller-manager   1                   036c82fddf805       kube-controller-manager-pause-033460
	d95cb14f612e5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   48 seconds ago      Exited              kube-scheduler            1                   54d9fc9480eb4       kube-scheduler-pause-033460
	
	
	==> coredns [3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59900 - 29446 "HINFO IN 226552963704854645.6329232445282991096. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.069001884s
	
	
	==> coredns [5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46008 - 27127 "HINFO IN 589809601669171669.1941158714043203276. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.078045559s
	
	
	==> describe nodes <==
	Name:               pause-033460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-033460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=pause-033460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_02_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:02:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-033460
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:04:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    pause-033460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 17216e1a6d7b45a28f4aa9ce3d9fd455
	  System UUID:                17216e1a-6d7b-45a2-8f4a-a9ce3d9fd455
	  Boot ID:                    b62ffe3a-a73a-41b2-b057-7f3eda6fb76a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-px5pk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     108s
	  kube-system                 etcd-pause-033460                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m
	  kube-system                 kube-apiserver-pause-033460             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-pause-033460    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-zbw4r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-pause-033460             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 107s               kube-proxy       
	  Normal   Starting                 20s                kube-proxy       
	  Normal   Starting                 45s                kube-proxy       
	  Normal   Starting                 2m1s               kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m                 kubelet          Node pause-033460 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m                 kubelet          Node pause-033460 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                2m                 kubelet          Node pause-033460 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  2m                 kubelet          Node pause-033460 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           108s               node-controller  Node pause-033460 event: Registered Node pause-033460 in Controller
	  Warning  ContainerGCFailed        60s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 26s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  25s (x8 over 25s)  kubelet          Node pause-033460 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    25s (x8 over 25s)  kubelet          Node pause-033460 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     25s (x7 over 25s)  kubelet          Node pause-033460 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  25s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           8s                 node-controller  Node pause-033460 event: Registered Node pause-033460 in Controller
	
	
	==> dmesg <==
	[  +0.058833] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060019] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.184085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.128962] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.262769] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.211609] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +0.065012] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.063661] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.932475] kauditd_printk_skb: 50 callbacks suppressed
	[  +9.375421] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.093965] kauditd_printk_skb: 37 callbacks suppressed
	[Mar16 00:03] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.088336] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[ +11.116541] kauditd_printk_skb: 80 callbacks suppressed
	[ +37.721782] systemd-fstab-generator[2211]: Ignoring "noauto" option for root device
	[  +0.134302] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.174911] systemd-fstab-generator[2237]: Ignoring "noauto" option for root device
	[  +0.140981] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.260714] systemd-fstab-generator[2273]: Ignoring "noauto" option for root device
	[Mar16 00:04] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.622676] kauditd_printk_skb: 83 callbacks suppressed
	[ +11.011416] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +5.729068] kauditd_printk_skb: 47 callbacks suppressed
	[ +16.451119] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	
	
	==> etcd [5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9] <==
	{"level":"info","ts":"2024-03-16T00:04:08.793953Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:04:10.291716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-16T00:04:10.291831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-16T00:04:10.291877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-03-16T00:04:10.291895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.291902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.291911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.29192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.29923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:10.299521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:10.300814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-03-16T00:04:10.30083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:04:10.300937Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:10.30098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:10.299275Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:pause-033460 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:04:28.926613Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-16T00:04:28.926839Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-033460","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"]}
	{"level":"warn","ts":"2024-03-16T00:04:28.927005Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-16T00:04:28.927077Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-16T00:04:28.928994Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-16T00:04:28.929063Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-16T00:04:28.930636Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"856b77cd5251110c","current-leader-member-id":"856b77cd5251110c"}
	{"level":"info","ts":"2024-03-16T00:04:28.934698Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-03-16T00:04:28.934806Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-03-16T00:04:28.934827Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-033460","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"]}
	
	
	==> etcd [99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613] <==
	{"level":"info","ts":"2024-03-16T00:04:34.581023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 4"}
	{"level":"info","ts":"2024-03-16T00:04:34.581039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 4"}
	{"level":"info","ts":"2024-03-16T00:04:34.581047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 4"}
	{"level":"info","ts":"2024-03-16T00:04:34.583527Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:pause-033460 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:04:34.583586Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:34.584629Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:04:34.584856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:34.585809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-03-16T00:04:34.603594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:34.603746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:51.709238Z","caller":"traceutil/trace.go:171","msg":"trace[236945713] linearizableReadLoop","detail":"{readStateIndex:582; appliedIndex:581; }","duration":"441.648644ms","start":"2024-03-16T00:04:51.267571Z","end":"2024-03-16T00:04:51.709219Z","steps":["trace[236945713] 'read index received'  (duration: 441.469727ms)","trace[236945713] 'applied index is now lower than readState.Index'  (duration: 178.125µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-16T00:04:51.709767Z","caller":"traceutil/trace.go:171","msg":"trace[227077965] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"525.061339ms","start":"2024-03-16T00:04:51.184692Z","end":"2024-03-16T00:04:51.709753Z","steps":["trace[227077965] 'process raft request'  (duration: 524.395745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:04:51.709872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"442.288235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-033460\" ","response":"range_response_count:1 size:5258"}
	{"level":"info","ts":"2024-03-16T00:04:51.709975Z","caller":"traceutil/trace.go:171","msg":"trace[89413010] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-033460; range_end:; response_count:1; response_revision:538; }","duration":"442.432355ms","start":"2024-03-16T00:04:51.267533Z","end":"2024-03-16T00:04:51.709966Z","steps":["trace[89413010] 'agreement among raft nodes before linearized reading'  (duration: 442.259823ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:04:51.710003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:04:51.267479Z","time spent":"442.516265ms","remote":"127.0.0.1:33010","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5282,"request content":"key:\"/registry/pods/kube-system/etcd-pause-033460\" "}
	{"level":"warn","ts":"2024-03-16T00:04:51.710277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.390029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-16T00:04:51.710372Z","caller":"traceutil/trace.go:171","msg":"trace[1723036101] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:538; }","duration":"228.43613ms","start":"2024-03-16T00:04:51.481881Z","end":"2024-03-16T00:04:51.710317Z","steps":["trace[1723036101] 'agreement among raft nodes before linearized reading'  (duration: 228.376628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:04:51.71055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:04:51.184671Z","time spent":"525.21706ms","remote":"127.0.0.1:33010","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5243,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-033460\" mod_revision:475 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-033460\" value_size:5191 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-033460\" > >"}
	{"level":"warn","ts":"2024-03-16T00:04:52.347684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.16991ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1228513223541496357 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" value_size:6085 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-16T00:04:52.348288Z","caller":"traceutil/trace.go:171","msg":"trace[1708351653] linearizableReadLoop","detail":"{readStateIndex:583; appliedIndex:582; }","duration":"119.630192ms","start":"2024-03-16T00:04:52.228626Z","end":"2024-03-16T00:04:52.348256Z","steps":["trace[1708351653] 'read index received'  (duration: 49.631µs)","trace[1708351653] 'applied index is now lower than readState.Index'  (duration: 119.579206ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:04:52.348509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.912822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" ","response":"range_response_count:1 size:6171"}
	{"level":"info","ts":"2024-03-16T00:04:52.34865Z","caller":"traceutil/trace.go:171","msg":"trace[1857772743] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-033460; range_end:; response_count:1; response_revision:539; }","duration":"120.065089ms","start":"2024-03-16T00:04:52.228576Z","end":"2024-03-16T00:04:52.348641Z","steps":["trace[1857772743] 'agreement among raft nodes before linearized reading'  (duration: 119.81778ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:04:52.348307Z","caller":"traceutil/trace.go:171","msg":"trace[2029435046] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"619.316814ms","start":"2024-03-16T00:04:51.728961Z","end":"2024-03-16T00:04:52.348278Z","steps":["trace[2029435046] 'process raft request'  (duration: 248.537337ms)","trace[2029435046] 'compare'  (duration: 369.029093ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:04:52.351449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:04:51.72895Z","time spent":"622.213504ms","remote":"127.0.0.1:33010","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6156,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" value_size:6085 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" > >"}
	{"level":"warn","ts":"2024-03-16T00:04:53.004723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.058711ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1228513223541496364 > lease_revoke:<id:110c8e44930f9918>","response":"size:29"}
	
	
	==> kernel <==
	 00:04:57 up 2 min,  0 users,  load average: 0.62, 0.29, 0.11
	Linux pause-033460 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194] <==
	I0316 00:04:36.103033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0316 00:04:36.113730       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0316 00:04:36.118600       1 aggregator.go:166] initial CRD sync complete...
	I0316 00:04:36.118819       1 autoregister_controller.go:141] Starting autoregister controller
	I0316 00:04:36.118906       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0316 00:04:36.118941       1 cache.go:39] Caches are synced for autoregister controller
	I0316 00:04:36.905953       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0316 00:04:37.143671       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.50.7]
	I0316 00:04:37.144966       1 controller.go:624] quota admission added evaluator for: endpoints
	I0316 00:04:37.149497       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0316 00:04:37.642724       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0316 00:04:37.654599       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0316 00:04:37.700015       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0316 00:04:37.732213       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0316 00:04:37.739099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0316 00:04:51.713902       1 trace.go:236] Trace[1239299117]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c7e86df4-beaf-407a-8fcf-1710601ca75f,client:192.168.50.7,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-033460/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (16-Mar-2024 00:04:51.180) (total time: 532ms):
	Trace[1239299117]: ["GuaranteedUpdate etcd3" audit-id:c7e86df4-beaf-407a-8fcf-1710601ca75f,key:/pods/kube-system/etcd-pause-033460,type:*core.Pod,resource:pods 532ms (00:04:51.181)
	Trace[1239299117]:  ---"Txn call completed" 527ms (00:04:51.711)]
	Trace[1239299117]: ---"Object stored in database" 528ms (00:04:51.711)
	Trace[1239299117]: [532.669569ms] [532.669569ms] END
	I0316 00:04:52.356768       1 trace.go:236] Trace[85279964]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:eedaf045-5b06-4beb-9a64-965e4de0a538,client:192.168.50.7,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-033460/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (16-Mar-2024 00:04:51.723) (total time: 632ms):
	Trace[85279964]: ["GuaranteedUpdate etcd3" audit-id:eedaf045-5b06-4beb-9a64-965e4de0a538,key:/pods/kube-system/kube-controller-manager-pause-033460,type:*core.Pod,resource:pods 632ms (00:04:51.724)
	Trace[85279964]:  ---"Txn call completed" 627ms (00:04:52.355)]
	Trace[85279964]: ---"Object stored in database" 628ms (00:04:52.355)
	Trace[85279964]: [632.728595ms] [632.728595ms] END
	
	
	==> kube-apiserver [c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27] <==
	I0316 00:04:18.846654       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0316 00:04:18.846663       1 controller.go:129] Ending legacy_token_tracking_controller
	I0316 00:04:18.846666       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0316 00:04:18.846677       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0316 00:04:18.846692       1 available_controller.go:439] Shutting down AvailableConditionController
	I0316 00:04:18.846705       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0316 00:04:18.846719       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0316 00:04:18.846728       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0316 00:04:18.847091       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0316 00:04:18.847306       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0316 00:04:18.847443       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0316 00:04:18.847590       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0316 00:04:18.847691       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0316 00:04:18.847717       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0316 00:04:18.847726       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0316 00:04:18.847759       1 controller.go:159] Shutting down quota evaluator
	I0316 00:04:18.847766       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.849245       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0316 00:04:18.849292       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0316 00:04:18.851405       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0316 00:04:18.851470       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0316 00:04:18.851501       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.851515       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.851521       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.851525       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4] <==
	I0316 00:04:13.762544       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0316 00:04:13.764267       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0316 00:04:13.764562       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0316 00:04:13.764594       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0316 00:04:13.767162       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0316 00:04:13.767399       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0316 00:04:13.767464       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0316 00:04:13.770517       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0316 00:04:13.770633       1 job_controller.go:226] "Starting job controller"
	I0316 00:04:13.770672       1 shared_informer.go:311] Waiting for caches to sync for job
	I0316 00:04:13.773964       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0316 00:04:13.774030       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0316 00:04:13.774219       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0316 00:04:13.776956       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0316 00:04:13.777266       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0316 00:04:13.777303       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0316 00:04:13.781405       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0316 00:04:13.781519       1 ttl_controller.go:124] "Starting TTL controller"
	I0316 00:04:13.781697       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0316 00:04:13.784403       1 shared_informer.go:318] Caches are synced for tokens
	W0316 00:04:23.786035       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	W0316 00:04:24.287097       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	W0316 00:04:25.288977       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	W0316 00:04:27.289679       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	E0316 00:04:27.289875       1 cidr_allocator.go:156] "Failed to list all nodes" err="Get \"https://192.168.50.7:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-controller-manager [cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34] <==
	I0316 00:04:48.579673       1 taint_manager.go:210] "Sending events to api server"
	I0316 00:04:48.579568       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0316 00:04:48.579946       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-033460"
	I0316 00:04:48.580078       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0316 00:04:48.580164       1 event.go:307] "Event occurred" object="pause-033460" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-033460 event: Registered Node pause-033460 in Controller"
	I0316 00:04:48.582639       1 shared_informer.go:318] Caches are synced for GC
	I0316 00:04:48.583892       1 shared_informer.go:318] Caches are synced for PV protection
	I0316 00:04:48.588985       1 shared_informer.go:318] Caches are synced for HPA
	I0316 00:04:48.589256       1 shared_informer.go:318] Caches are synced for expand
	I0316 00:04:48.596148       1 shared_informer.go:318] Caches are synced for job
	I0316 00:04:48.616134       1 shared_informer.go:318] Caches are synced for TTL
	I0316 00:04:48.616417       1 shared_informer.go:318] Caches are synced for resource quota
	I0316 00:04:48.621997       1 shared_informer.go:318] Caches are synced for deployment
	I0316 00:04:48.631446       1 shared_informer.go:318] Caches are synced for disruption
	I0316 00:04:48.659813       1 shared_informer.go:318] Caches are synced for resource quota
	I0316 00:04:48.675186       1 shared_informer.go:318] Caches are synced for daemon sets
	I0316 00:04:48.685408       1 shared_informer.go:318] Caches are synced for stateful set
	I0316 00:04:48.700939       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0316 00:04:48.701217       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0316 00:04:48.702495       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0316 00:04:48.702588       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0316 00:04:48.732560       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0316 00:04:49.129552       1 shared_informer.go:318] Caches are synced for garbage collector
	I0316 00:04:49.129609       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0316 00:04:49.135528       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046] <==
	I0316 00:04:09.508213       1 server_others.go:69] "Using iptables proxy"
	I0316 00:04:11.744985       1 node.go:141] Successfully retrieved node IP: 192.168.50.7
	I0316 00:04:11.808449       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:04:11.808492       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:04:11.811292       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:04:11.811439       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:04:11.811688       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:04:11.811718       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:11.813453       1 config.go:188] "Starting service config controller"
	I0316 00:04:11.813505       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:04:11.813531       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:04:11.813555       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:04:11.818091       1 config.go:315] "Starting node config controller"
	I0316 00:04:11.818136       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:04:11.915920       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:04:11.916089       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:04:11.919187       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b] <==
	I0316 00:04:36.455557       1 server_others.go:69] "Using iptables proxy"
	I0316 00:04:36.467650       1 node.go:141] Successfully retrieved node IP: 192.168.50.7
	I0316 00:04:36.505956       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:04:36.505979       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:04:36.508602       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:04:36.508678       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:04:36.508958       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:04:36.509145       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:36.510019       1 config.go:188] "Starting service config controller"
	I0316 00:04:36.510089       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:04:36.510123       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:04:36.510139       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:04:36.510705       1 config.go:315] "Starting node config controller"
	I0316 00:04:36.510892       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:04:36.610186       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:04:36.610271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:04:36.611723       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f] <==
	I0316 00:04:32.913748       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:04:35.974961       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:04:35.975013       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:04:35.975024       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:04:35.975031       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:04:36.043992       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:04:36.044932       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:36.052083       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:04:36.052437       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:04:36.053618       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:04:36.052494       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:04:36.154684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979] <==
	I0316 00:04:09.620432       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:04:11.676114       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:04:11.676272       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:04:11.676305       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:04:11.676414       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:04:11.733797       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:04:11.734043       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:11.739539       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:04:11.741551       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:04:11.741668       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:04:11.741854       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:04:11.842513       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:04:29.299265       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0316 00:04:29.299474       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0316 00:04:29.299774       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.626590    3203 scope.go:117] "RemoveContainer" containerID="5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.628617    3203 scope.go:117] "RemoveContainer" containerID="c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.630752    3203 scope.go:117] "RemoveContainer" containerID="7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.631841    3203 scope.go:117] "RemoveContainer" containerID="d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.716146    3203 kubelet_node_status.go:70] "Attempting to register node" node="pause-033460"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: E0316 00:04:31.717406    3203 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.7:8443: connect: connection refused" node="pause-033460"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: W0316 00:04:31.964015    3203 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:31 pause-033460 kubelet[3203]: E0316 00:04:31.964091    3203 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:31 pause-033460 kubelet[3203]: W0316 00:04:31.986900    3203 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-033460&limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:31 pause-033460 kubelet[3203]: E0316 00:04:31.986974    3203 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dpause-033460&limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:32 pause-033460 kubelet[3203]: W0316 00:04:32.008704    3203 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:32 pause-033460 kubelet[3203]: E0316 00:04:32.008777    3203 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:32 pause-033460 kubelet[3203]: I0316 00:04:32.519759    3203 kubelet_node_status.go:70] "Attempting to register node" node="pause-033460"
	Mar 16 00:04:35 pause-033460 kubelet[3203]: I0316 00:04:35.977874    3203 apiserver.go:52] "Watching apiserver"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.014056    3203 topology_manager.go:215] "Topology Admit Handler" podUID="97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c" podNamespace="kube-system" podName="coredns-5dd5756b68-px5pk"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.014875    3203 topology_manager.go:215] "Topology Admit Handler" podUID="053cbe3c-45a9-44d2-b4a8-c98db95e8175" podNamespace="kube-system" podName="kube-proxy-zbw4r"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.089660    3203 kubelet_node_status.go:108] "Node was previously registered" node="pause-033460"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.089903    3203 kubelet_node_status.go:73] "Successfully registered node" node="pause-033460"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.091215    3203 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.092108    3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.103783    3203 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: E0316 00:04:36.160746    3203 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-033460\" already exists" pod="kube-system/kube-scheduler-pause-033460"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.203676    3203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/053cbe3c-45a9-44d2-b4a8-c98db95e8175-lib-modules\") pod \"kube-proxy-zbw4r\" (UID: \"053cbe3c-45a9-44d2-b4a8-c98db95e8175\") " pod="kube-system/kube-proxy-zbw4r"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.203764    3203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/053cbe3c-45a9-44d2-b4a8-c98db95e8175-xtables-lock\") pod \"kube-proxy-zbw4r\" (UID: \"053cbe3c-45a9-44d2-b4a8-c98db95e8175\") " pod="kube-system/kube-proxy-zbw4r"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.315962    3203 scope.go:117] "RemoveContainer" containerID="5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-033460 -n pause-033460
helpers_test.go:261: (dbg) Run:  kubectl --context pause-033460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-033460 -n pause-033460
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-033460 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-033460 logs -n 25: (1.377620054s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo docker                         | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| start   | -p pause-033460                                      | pause-033460              | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:04 UTC |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo cat                            | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo                                | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo find                           | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-869135 sudo crio                           | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p cilium-869135                                     | cilium-869135             | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:03 UTC |
	| delete  | -p force-systemd-env-380757                          | force-systemd-env-380757  | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:03 UTC |
	| start   | -p stopped-upgrade-684927                            | minikube                  | jenkins | v1.26.0 | 16 Mar 24 00:03 UTC | 16 Mar 24 00:04 UTC |
	|         | --memory=2200 --vm-driver=kvm2                       |                           |         |         |                     |                     |
	|         |  --container-runtime=crio                            |                           |         |         |                     |                     |
	| start   | -p cert-expiration-982877                            | cert-expiration-982877    | jenkins | v1.32.0 | 16 Mar 24 00:03 UTC |                     |
	|         | --memory=2048                                        |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-209767                         | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:04 UTC |
	| start   | -p kubernetes-upgrade-209767                         | kubernetes-upgrade-209767 | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                    |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| stop    | stopped-upgrade-684927 stop                          | minikube                  | jenkins | v1.26.0 | 16 Mar 24 00:04 UTC | 16 Mar 24 00:04 UTC |
	| start   | -p stopped-upgrade-684927                            | stopped-upgrade-684927    | jenkins | v1.32.0 | 16 Mar 24 00:04 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:04:53
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:04:53.247654  119090 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:04:53.247859  119090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:04:53.247873  119090 out.go:304] Setting ErrFile to fd 2...
	I0316 00:04:53.247880  119090 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:04:53.248199  119090 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:04:53.248974  119090 out.go:298] Setting JSON to false
	I0316 00:04:53.250315  119090 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10043,"bootTime":1710537450,"procs":219,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:04:53.250405  119090 start.go:139] virtualization: kvm guest
	I0316 00:04:53.253056  119090 out.go:177] * [stopped-upgrade-684927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:04:53.255141  119090 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:04:53.255161  119090 notify.go:220] Checking for updates...
	I0316 00:04:53.256649  119090 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:04:53.258124  119090 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:04:53.259505  119090 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:04:53.260929  119090 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:04:53.262302  119090 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:04:53.264245  119090 config.go:182] Loaded profile config "stopped-upgrade-684927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I0316 00:04:53.264849  119090 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:04:53.264918  119090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:04:53.285291  119090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33181
	I0316 00:04:53.285816  119090 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:04:53.286456  119090 main.go:141] libmachine: Using API Version  1
	I0316 00:04:53.286482  119090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:04:53.286882  119090 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:04:53.287111  119090 main.go:141] libmachine: (stopped-upgrade-684927) Calling .DriverName
	I0316 00:04:53.289150  119090 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:04:53.290524  119090 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:04:53.291007  119090 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:04:53.291057  119090 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:04:53.306728  119090 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33981
	I0316 00:04:53.307128  119090 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:04:53.307668  119090 main.go:141] libmachine: Using API Version  1
	I0316 00:04:53.307693  119090 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:04:53.308032  119090 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:04:53.308249  119090 main.go:141] libmachine: (stopped-upgrade-684927) Calling .DriverName
	I0316 00:04:53.348448  119090 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:04:53.349994  119090 start.go:297] selected driver: kvm2
	I0316 00:04:53.350026  119090 start.go:901] validating driver "kvm2" against &{Name:stopped-upgrade-684927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684
927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpti
mizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0316 00:04:53.350149  119090 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:04:53.350883  119090 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:04:53.350962  119090 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:04:53.366255  119090 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:04:53.366665  119090 cni.go:84] Creating CNI manager for ""
	I0316 00:04:53.366689  119090 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:04:53.366764  119090 start.go:340] cluster config:
	{Name:stopped-upgrade-684927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.26.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.32@sha256:9190bd2393eae887316c97a74370b7d5dad8f0b2ef91ac2662bc36f7ef8e0b95 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.1 ClusterName:stopped-upgrade-684927 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:
[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.203 Port:8443 KubernetesVersion:v1.24.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClient
Path: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:0s}
	I0316 00:04:53.369023  119090 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:04:53.371033  119090 out.go:177] * Starting "stopped-upgrade-684927" primary control-plane node in "stopped-upgrade-684927" cluster
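
	(Editor's sketch.) The two cni.go lines above record the decision `"kvm2" driver + "crio" runtime found, recommending bridge`. The following is a minimal, hypothetical Go sketch of that kind of selection rule; `chooseCNI` and the driver list are illustrative stand-ins, not minikube's actual cni.go logic.

    // Illustrative only: how a "VM driver + non-Docker runtime => bridge CNI"
    // recommendation can be expressed. Not minikube's real implementation.
    package main

    import "fmt"

    func chooseCNI(requested, driver, runtime string) string {
        if requested != "" {
            return requested // honor an explicit --cni flag
        }
        vmDrivers := map[string]bool{"kvm2": true, "hyperkit": true, "virtualbox": true}
        if vmDrivers[driver] && runtime != "docker" {
            // a VM driver with crio/containerd needs an explicit bridge CNI
            return "bridge"
        }
        return "auto"
    }

    func main() {
        fmt.Println(chooseCNI("", "kvm2", "crio")) // prints: bridge
    }
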
	I0316 00:04:52.014555  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:04:52.152422  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:04:52.186987  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:04:52.215117  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:04:52.243781  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:04:52.282928  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:04:52.313845  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/cert-expiration-982877/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:04:52.342382  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:04:52.376268  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:04:52.412593  118366 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:04:52.440427  118366 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:04:52.461852  118366 ssh_runner.go:195] Run: openssl version
	I0316 00:04:52.468628  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:04:52.483741  118366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:04:52.489036  118366 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:04:52.489096  118366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:04:52.496613  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:04:52.512277  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:04:52.546339  118366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:52.551526  118366 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:52.551593  118366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:04:52.558178  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:04:52.571259  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:04:52.585217  118366 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:04:52.591263  118366 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:04:52.591346  118366 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:04:52.603028  118366 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
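
	(Editor's sketch.) The openssl/ln sequence above is the standard CA-trust step: hash each PEM with `openssl x509 -hash -noout` and expose it in /etc/ssl/certs as `<hash>.0`. A minimal Go sketch of the same idea follows; paths and error handling are simplified and it runs locally rather than over ssh_runner as minikube does.

    // Minimal sketch, assuming openssl is on PATH and certsDir is writable.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func linkCert(pemPath, certsDir string) error {
        // openssl x509 -hash -noout -in <pem>  -> subject hash, e.g. b5213941
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        // equivalent of: test -L <link> || ln -fs <pem> <link>
        if _, err := os.Lstat(link); err == nil {
            return nil
        }
        return os.Symlink(pemPath, link)
    }

    func main() {
        if err := linkCert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
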
	I0316 00:04:52.615501  118366 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:04:52.619980  118366 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 00:04:52.620037  118366 kubeadm.go:391] StartCluster: {Name:cert-expiration-982877 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.28.4 ClusterName:cert-expiration-982877 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.185 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false D
isableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:04:52.620125  118366 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:04:52.620194  118366 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:04:52.663550  118366 cri.go:89] found id: ""
	I0316 00:04:52.663648  118366 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 00:04:52.674892  118366 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:04:52.686352  118366 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:04:52.698165  118366 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:04:52.698181  118366 kubeadm.go:156] found existing configuration files:
	
	I0316 00:04:52.698243  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:04:52.709357  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:04:52.709423  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:04:52.721136  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:04:52.732739  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:04:52.732816  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:04:52.744006  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:04:52.754913  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:04:52.754973  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:04:52.766528  118366 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:04:52.780303  118366 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:04:52.780370  118366 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:04:52.794598  118366 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:04:52.934327  118366 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0316 00:04:52.934622  118366 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:04:53.125378  118366 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:04:53.125531  118366 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:04:53.125640  118366 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:04:53.398914  118366 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
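
	(Editor's sketch.) The grep/rm sequence above checks each kubeconfig under /etc/kubernetes for the control-plane endpoint and removes any file that does not reference it, so kubeadm can regenerate it. Below is a simplified stand-in for that check; it is not minikube's kubeadm.go and ignores the ssh_runner layer.

    // Sketch of the stale-config cleanup, assuming local file access.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func cleanStaleConfigs(endpoint string, files []string) {
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // file missing: nothing to clean (first start)
            }
            if !strings.Contains(string(data), endpoint) {
                fmt.Printf("%s does not reference %s, removing\n", f, endpoint)
                os.Remove(f)
            }
        }
    }

    func main() {
        cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }
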
	I0316 00:04:52.916131  117977 addons.go:505] duration metric: took 424.192887ms for enable addons: enabled=[]
	I0316 00:04:52.996107  117977 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:04:53.018521  117977 node_ready.go:35] waiting up to 6m0s for node "pause-033460" to be "Ready" ...
	I0316 00:04:53.023172  117977 node_ready.go:49] node "pause-033460" has status "Ready":"True"
	I0316 00:04:53.023209  117977 node_ready.go:38] duration metric: took 4.648505ms for node "pause-033460" to be "Ready" ...
	I0316 00:04:53.023224  117977 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:04:53.038671  117977 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-px5pk" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.045885  117977 pod_ready.go:92] pod "coredns-5dd5756b68-px5pk" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.045919  117977 pod_ready.go:81] duration metric: took 7.199094ms for pod "coredns-5dd5756b68-px5pk" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.045931  117977 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.151673  117977 pod_ready.go:92] pod "etcd-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.151705  117977 pod_ready.go:81] duration metric: took 105.764484ms for pod "etcd-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.151718  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.527146  117977 pod_ready.go:92] pod "kube-apiserver-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.527178  117977 pod_ready.go:81] duration metric: took 375.450745ms for pod "kube-apiserver-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.527192  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.926677  117977 pod_ready.go:92] pod "kube-controller-manager-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:53.926715  117977 pod_ready.go:81] duration metric: took 399.513956ms for pod "kube-controller-manager-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:53.926733  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-zbw4r" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.326307  117977 pod_ready.go:92] pod "kube-proxy-zbw4r" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:54.326344  117977 pod_ready.go:81] duration metric: took 399.602011ms for pod "kube-proxy-zbw4r" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.326357  117977 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.726188  117977 pod_ready.go:92] pod "kube-scheduler-pause-033460" in "kube-system" namespace has status "Ready":"True"
	I0316 00:04:54.726222  117977 pod_ready.go:81] duration metric: took 399.856352ms for pod "kube-scheduler-pause-033460" in "kube-system" namespace to be "Ready" ...
	I0316 00:04:54.726233  117977 pod_ready.go:38] duration metric: took 1.702994885s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:04:54.726254  117977 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:04:54.726316  117977 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:04:54.742664  117977 api_server.go:72] duration metric: took 2.250768002s to wait for apiserver process to appear ...
	I0316 00:04:54.742692  117977 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:04:54.742737  117977 api_server.go:253] Checking apiserver healthz at https://192.168.50.7:8443/healthz ...
	I0316 00:04:54.749499  117977 api_server.go:279] https://192.168.50.7:8443/healthz returned 200:
	ok
	I0316 00:04:54.751161  117977 api_server.go:141] control plane version: v1.28.4
	I0316 00:04:54.751188  117977 api_server.go:131] duration metric: took 8.48764ms to wait for apiserver health ...
	I0316 00:04:54.751199  117977 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:04:54.931558  117977 system_pods.go:59] 6 kube-system pods found
	I0316 00:04:54.931591  117977 system_pods.go:61] "coredns-5dd5756b68-px5pk" [97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c] Running
	I0316 00:04:54.931597  117977 system_pods.go:61] "etcd-pause-033460" [02cd1ede-7926-43e9-9b4f-b507c75e5838] Running
	I0316 00:04:54.931602  117977 system_pods.go:61] "kube-apiserver-pause-033460" [a8fc1125-4f29-447f-ad59-5d2332fcb764] Running
	I0316 00:04:54.931607  117977 system_pods.go:61] "kube-controller-manager-pause-033460" [d50419c8-a4d0-4d4a-974e-37b0a9d9e7ad] Running
	I0316 00:04:54.931612  117977 system_pods.go:61] "kube-proxy-zbw4r" [053cbe3c-45a9-44d2-b4a8-c98db95e8175] Running
	I0316 00:04:54.931616  117977 system_pods.go:61] "kube-scheduler-pause-033460" [5aa60435-9003-4902-9a39-7a3f263d5a3c] Running
	I0316 00:04:54.931624  117977 system_pods.go:74] duration metric: took 180.417319ms to wait for pod list to return data ...
	I0316 00:04:54.931633  117977 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:04:55.126073  117977 default_sa.go:45] found service account: "default"
	I0316 00:04:55.126108  117977 default_sa.go:55] duration metric: took 194.467332ms for default service account to be created ...
	I0316 00:04:55.126120  117977 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:04:55.334064  117977 system_pods.go:86] 6 kube-system pods found
	I0316 00:04:55.334100  117977 system_pods.go:89] "coredns-5dd5756b68-px5pk" [97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c] Running
	I0316 00:04:55.334108  117977 system_pods.go:89] "etcd-pause-033460" [02cd1ede-7926-43e9-9b4f-b507c75e5838] Running
	I0316 00:04:55.334122  117977 system_pods.go:89] "kube-apiserver-pause-033460" [a8fc1125-4f29-447f-ad59-5d2332fcb764] Running
	I0316 00:04:55.334130  117977 system_pods.go:89] "kube-controller-manager-pause-033460" [d50419c8-a4d0-4d4a-974e-37b0a9d9e7ad] Running
	I0316 00:04:55.334136  117977 system_pods.go:89] "kube-proxy-zbw4r" [053cbe3c-45a9-44d2-b4a8-c98db95e8175] Running
	I0316 00:04:55.334142  117977 system_pods.go:89] "kube-scheduler-pause-033460" [5aa60435-9003-4902-9a39-7a3f263d5a3c] Running
	I0316 00:04:55.334152  117977 system_pods.go:126] duration metric: took 208.024429ms to wait for k8s-apps to be running ...
	I0316 00:04:55.334164  117977 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:04:55.334219  117977 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:04:55.355853  117977 system_svc.go:56] duration metric: took 21.678177ms WaitForService to wait for kubelet
	I0316 00:04:55.355890  117977 kubeadm.go:576] duration metric: took 2.863998496s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:04:55.355914  117977 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:04:55.527731  117977 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:04:55.527760  117977 node_conditions.go:123] node cpu capacity is 2
	I0316 00:04:55.527770  117977 node_conditions.go:105] duration metric: took 171.849965ms to run NodePressure ...
	I0316 00:04:55.527782  117977 start.go:240] waiting for startup goroutines ...
	I0316 00:04:55.527789  117977 start.go:245] waiting for cluster config update ...
	I0316 00:04:55.527795  117977 start.go:254] writing updated cluster config ...
	I0316 00:04:55.528133  117977 ssh_runner.go:195] Run: rm -f paused
	I0316 00:04:55.580235  117977 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:04:55.582291  117977 out.go:177] * Done! kubectl is now configured to use "pause-033460" cluster and "default" namespace by default
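
	(Editor's sketch.) The pause-033460 stream above waits for the apiserver healthz endpoint to return 200 ("https://192.168.50.7:8443/healthz returned 200: ok") before moving on. A self-contained Go sketch of that polling loop follows; TLS verification is skipped only to keep the example standalone, whereas minikube itself trusts the cluster CA from the kubeconfig.

    // Minimal healthz polling sketch; URL and timeout are example values.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // healthz returned 200: ok
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy after %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.50.7:8443/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
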
	I0316 00:04:53.497774  118366 out.go:204]   - Generating certificates and keys ...
	I0316 00:04:53.497890  118366 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:04:53.497998  118366 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:04:53.596065  118366 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 00:04:53.723651  118366 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 00:04:53.900006  118366 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 00:04:54.075539  118366 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 00:04:54.438157  118366 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 00:04:54.438389  118366 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-982877 localhost] and IPs [192.168.72.185 127.0.0.1 ::1]
	I0316 00:04:54.637743  118366 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 00:04:54.637927  118366 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-982877 localhost] and IPs [192.168.72.185 127.0.0.1 ::1]
	I0316 00:04:54.689709  118366 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 00:04:55.028879  118366 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 00:04:55.374123  118366 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 00:04:55.374296  118366 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:04:55.472127  118366 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:04:55.655798  118366 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:04:56.135370  118366 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:04:56.280546  118366 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:04:56.281232  118366 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:04:56.284567  118366 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:04:56.286296  118366 out.go:204]   - Booting up control plane ...
	I0316 00:04:56.286424  118366 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:04:56.286524  118366 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:04:56.286842  118366 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:04:56.317243  118366 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:04:56.317358  118366 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:04:56.317423  118366 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:04:56.461122  118366 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:04:53.098749  118817 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:04:53.099372  118817 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:04:53.099405  118817 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:04:53.099310  118936 retry.go:31] will retry after 2.682598241s: waiting for machine to come up
	I0316 00:04:55.783106  118817 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | domain kubernetes-upgrade-209767 has defined MAC address 52:54:00:59:2d:2b in network mk-kubernetes-upgrade-209767
	I0316 00:04:55.783581  118817 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | unable to find current IP address of domain kubernetes-upgrade-209767 in network mk-kubernetes-upgrade-209767
	I0316 00:04:55.783609  118817 main.go:141] libmachine: (kubernetes-upgrade-209767) DBG | I0316 00:04:55.783524  118936 retry.go:31] will retry after 2.772272568s: waiting for machine to come up
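
	(Editor's sketch.) The kubernetes-upgrade-209767 lines above show the driver repeatedly failing to find the domain's IP and retrying after a growing delay ("will retry after 2.68s", then "2.77s"). The sketch below captures that wait-for-DHCP-lease pattern; `lookupIP` is a hypothetical placeholder for the libvirt query.

    // Retry-with-growing-delay sketch; lookupIP is a stand-in, not a real API.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func lookupIP(domain string) (string, error) {
        // placeholder: a real driver would read the libvirt DHCP leases here
        return "", errors.New("unable to find current IP address")
    }

    func waitForIP(domain string, attempts int) (string, error) {
        delay := 500 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookupIP(domain); err == nil {
                return ip, nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay))) // jittered backoff
            fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
        return "", fmt.Errorf("%s never reported an IP", domain)
    }

    func main() {
        _, _ = waitForIP("kubernetes-upgrade-209767", 3)
    }
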
	I0316 00:04:53.374134  119090 preload.go:132] Checking if preload exists for k8s version v1.24.1 and runtime crio
	I0316 00:04:53.374189  119090 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4
	I0316 00:04:53.374199  119090 cache.go:56] Caching tarball of preloaded images
	I0316 00:04:53.374319  119090 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:04:53.374389  119090 cache.go:59] Finished verifying existence of preloaded tar for v1.24.1 on crio
	I0316 00:04:53.374518  119090 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/stopped-upgrade-684927/config.json ...
	I0316 00:04:53.375885  119090 start.go:360] acquireMachinesLock for stopped-upgrade-684927: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
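
	(Editor's sketch.) The preload.go lines above look for a cached image tarball named for the Kubernetes version and runtime before downloading anything. The sketch below reconstructs that path check from the log line's file name; the "v18" preload schema segment and the "cri-o" spelling are taken from it as-is, and the helper is illustrative, not minikube's preload.go.

    // Sketch of the preload cache lookup, assuming MINIKUBE_HOME points at
    // the .minikube directory (empty value yields a relative path).
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func preloadPath(minikubeHome, k8sVersion, runtime string) string {
        if runtime == "crio" {
            runtime = "cri-o" // the tarball name spells the runtime this way
        }
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    func main() {
        p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.24.1", "crio")
        if _, err := os.Stat(p); err == nil {
            fmt.Println("Found local preload:", p)
        } else {
            fmt.Println("preload missing, would download:", p)
        }
    }
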
	
	
	==> CRI-O <==
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.733317506Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547498733294396,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=395c1340-9ed9-46ca-9e71-fce7d186dd5e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.733996843Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3c01c5b7-97e4-4642-879c-af283d1b40ab name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.734050931Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3c01c5b7-97e4-4642-879c-af283d1b40ab name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.734299200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3c01c5b7-97e4-4642-879c-af283d1b40ab name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.777831204Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=744d521a-799a-4c50-b819-8a76eea17cbd name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.777928212Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=744d521a-799a-4c50-b819-8a76eea17cbd name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.779021016Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0283da9a-77ac-4850-b913-c2d4f2a15d50 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.779556156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547498779524598,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0283da9a-77ac-4850-b913-c2d4f2a15d50 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.780497531Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5c43a23a-3dae-42ca-b453-af6e56ebfbe3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.780574479Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5c43a23a-3dae-42ca-b453-af6e56ebfbe3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.780899706Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5c43a23a-3dae-42ca-b453-af6e56ebfbe3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.828027890Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3a87f0ca-c01f-4c34-8506-b55705a2c023 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.828123792Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3a87f0ca-c01f-4c34-8506-b55705a2c023 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.829678526Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94eb4f42-cc55-4b41-ad76-2b8d0f1d5d4c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.830061644Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547498830036373,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94eb4f42-cc55-4b41-ad76-2b8d0f1d5d4c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.830740134Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50f6d498-2dbf-4a45-b503-2fbc3c5b910b name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.830821834Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50f6d498-2dbf-4a45-b503-2fbc3c5b910b name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.831060513Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50f6d498-2dbf-4a45-b503-2fbc3c5b910b name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.878574254Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70e8a2b6-dafd-47c6-b028-10fe25d870a4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.878678230Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70e8a2b6-dafd-47c6-b028-10fe25d870a4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.879678001Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db4b4a95-c91e-406e-a7cc-2f314c153749 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.880050650Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710547498880027255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:116233,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db4b4a95-c91e-406e-a7cc-2f314c153749 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.880565538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=03a6c5a1-c455-4480-a99f-07b160c47e07 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.880640285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=03a6c5a1-c455-4480-a99f-07b160c47e07 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:04:58 pause-033460 crio[2288]: time="2024-03-16 00:04:58.881006144Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710547476333997791,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcffd2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessageP
ath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710547471680211851,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/terminat
ion-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710547471703683513,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev
/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710547471653031761,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.termina
tionMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710547471668633889,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy:
File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710547468956021940,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"contain
erPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046,PodSandboxId:5344a8bad2a6c498df5d0e7f8664ee57ca61a0f5e2e3c9508ffbf604aaf1a8f3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710547448009616450,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zbw4r,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 053cbe3c-45a9-44d2-b4a8-c98db95e8175,},Annotations:map[string]string{io.kubernetes.container.hash: e3dcff
d2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d,PodSandboxId:6679c749bfbc1a75d8bbe729f857b93a3088facc675ce2189b4e9b102600a581,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710547448251297165,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-px5pk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c,},Annotations:map[string]string{io.kubernetes.container.hash: 6211985c,io.kubernetes.container.ports
: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27,PodSandboxId:bd7c7f51f2f90f02747178cd1d6640c0523fef426f47930863d6b5958e320c87,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710547447990870658,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pau
se-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0d1fcd866ccfa85794889edddb21e06e,},Annotations:map[string]string{io.kubernetes.container.hash: 16975a52,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4,PodSandboxId:036c82fddf8056f9c5f82e64ad867fc4007c37e71cf9de8db6864f4863c17798,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710547447826716965,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-man
ager-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82957795aa183686e434dc94fc392117,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9,PodSandboxId:83711b8742f247a85d8a4a2b8c4f025f73e8cf8697687f2d4d8eb4583b082b05,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710547447918744297,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-033460,io.kubernetes.pod.namespace: ku
be-system,io.kubernetes.pod.uid: 51d218d4eaf28c753a6fa12dec3573b3,},Annotations:map[string]string{io.kubernetes.container.hash: c34e71b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979,PodSandboxId:54d9fc9480eb4604fa0dedb125ed3c3a42433ec34aef884d42e0e9f99bc2ce9a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710547447822089024,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-033460,io.kubernetes.pod.namespace: kube-system,io.kubern
etes.pod.uid: 25820ffdb94f7cf26bbd2d8c162c7954,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=03a6c5a1-c455-4480-a99f-07b160c47e07 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7b8a5530484e3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   22 seconds ago      Running             kube-proxy                2                   5344a8bad2a6c       kube-proxy-zbw4r
	cacdbc3b880cc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   27 seconds ago      Running             kube-controller-manager   2                   036c82fddf805       kube-controller-manager-pause-033460
	6774f685c0e50       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   27 seconds ago      Running             kube-scheduler            2                   54d9fc9480eb4       kube-scheduler-pause-033460
	00b89a48cb098       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   27 seconds ago      Running             kube-apiserver            2                   bd7c7f51f2f90       kube-apiserver-pause-033460
	99c11f33033d6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   27 seconds ago      Running             etcd                      2                   83711b8742f24       etcd-pause-033460
	5945c975bf56d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   30 seconds ago      Running             coredns                   2                   6679c749bfbc1       coredns-5dd5756b68-px5pk
	3e29e1aa5b91a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   50 seconds ago      Exited              coredns                   1                   6679c749bfbc1       coredns-5dd5756b68-px5pk
	5bdd14598b4a0       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   50 seconds ago      Exited              kube-proxy                1                   5344a8bad2a6c       kube-proxy-zbw4r
	c2ade6bc9d21f       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   50 seconds ago      Exited              kube-apiserver            1                   bd7c7f51f2f90       kube-apiserver-pause-033460
	5e8f5df1ebf95       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   51 seconds ago      Exited              etcd                      1                   83711b8742f24       etcd-pause-033460
	7b90ea0d40f58       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   51 seconds ago      Exited              kube-controller-manager   1                   036c82fddf805       kube-controller-manager-pause-033460
	d95cb14f612e5       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   51 seconds ago      Exited              kube-scheduler            1                   54d9fc9480eb4       kube-scheduler-pause-033460
	
	
	==> coredns [3e29e1aa5b91a7fd491f10d659984acce7525b1dcc27ab68c67c35530954d77d] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:59900 - 29446 "HINFO IN 226552963704854645.6329232445282991096. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.069001884s
	
	
	==> coredns [5945c975bf56de125e42a57b41fb39a9668708cee0deb77aa6f9801e2d7500f4] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:46008 - 27127 "HINFO IN 589809601669171669.1941158714043203276. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.078045559s
	
	
	==> describe nodes <==
	Name:               pause-033460
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-033460
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=pause-033460
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_02_56_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:02:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-033460
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:04:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:04:36 +0000   Sat, 16 Mar 2024 00:02:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.7
	  Hostname:    pause-033460
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015708Ki
	  pods:               110
	System Info:
	  Machine ID:                 17216e1a6d7b45a28f4aa9ce3d9fd455
	  System UUID:                17216e1a-6d7b-45a2-8f4a-a9ce3d9fd455
	  Boot ID:                    b62ffe3a-a73a-41b2-b057-7f3eda6fb76a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-px5pk                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     111s
	  kube-system                 etcd-pause-033460                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m3s
	  kube-system                 kube-apiserver-pause-033460             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-pause-033460    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-zbw4r                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-pause-033460             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 109s               kube-proxy       
	  Normal   Starting                 22s                kube-proxy       
	  Normal   Starting                 47s                kube-proxy       
	  Normal   Starting                 2m4s               kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    2m3s               kubelet          Node pause-033460 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m3s               kubelet          Node pause-033460 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m3s               kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeReady                2m3s               kubelet          Node pause-033460 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  2m3s               kubelet          Node pause-033460 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           111s               node-controller  Node pause-033460 event: Registered Node pause-033460 in Controller
	  Warning  ContainerGCFailed        63s                kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 29s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  28s (x8 over 28s)  kubelet          Node pause-033460 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    28s (x8 over 28s)  kubelet          Node pause-033460 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     28s (x7 over 28s)  kubelet          Node pause-033460 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  28s                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           11s                node-controller  Node pause-033460 event: Registered Node pause-033460 in Controller
	
	
	==> dmesg <==
	[  +0.058833] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.060019] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.184085] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.128962] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.262769] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.211609] systemd-fstab-generator[756]: Ignoring "noauto" option for root device
	[  +0.065012] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.063661] systemd-fstab-generator[932]: Ignoring "noauto" option for root device
	[  +0.932475] kauditd_printk_skb: 50 callbacks suppressed
	[  +9.375421] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.093965] kauditd_printk_skb: 37 callbacks suppressed
	[Mar16 00:03] kauditd_printk_skb: 21 callbacks suppressed
	[  +0.088336] systemd-fstab-generator[1497]: Ignoring "noauto" option for root device
	[ +11.116541] kauditd_printk_skb: 80 callbacks suppressed
	[ +37.721782] systemd-fstab-generator[2211]: Ignoring "noauto" option for root device
	[  +0.134302] systemd-fstab-generator[2223]: Ignoring "noauto" option for root device
	[  +0.174911] systemd-fstab-generator[2237]: Ignoring "noauto" option for root device
	[  +0.140981] systemd-fstab-generator[2249]: Ignoring "noauto" option for root device
	[  +0.260714] systemd-fstab-generator[2273]: Ignoring "noauto" option for root device
	[Mar16 00:04] systemd-fstab-generator[2366]: Ignoring "noauto" option for root device
	[  +0.082514] kauditd_printk_skb: 100 callbacks suppressed
	[ +12.622676] kauditd_printk_skb: 83 callbacks suppressed
	[ +11.011416] systemd-fstab-generator[3196]: Ignoring "noauto" option for root device
	[  +5.729068] kauditd_printk_skb: 47 callbacks suppressed
	[ +16.451119] systemd-fstab-generator[3605]: Ignoring "noauto" option for root device
	
	
	==> etcd [5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9] <==
	{"level":"info","ts":"2024-03-16T00:04:08.793953Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:04:10.291716Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-16T00:04:10.291831Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-16T00:04:10.291877Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgPreVoteResp from 856b77cd5251110c at term 2"}
	{"level":"info","ts":"2024-03-16T00:04:10.291895Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became candidate at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.291902Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.291911Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.29192Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 3"}
	{"level":"info","ts":"2024-03-16T00:04:10.29923Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:10.299521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:10.300814Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-03-16T00:04:10.30083Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:04:10.300937Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:10.30098Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:10.299275Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:pause-033460 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:04:28.926613Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-16T00:04:28.926839Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"pause-033460","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"]}
	{"level":"warn","ts":"2024-03-16T00:04:28.927005Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-16T00:04:28.927077Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-16T00:04:28.928994Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.7:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-16T00:04:28.929063Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.7:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-16T00:04:28.930636Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"856b77cd5251110c","current-leader-member-id":"856b77cd5251110c"}
	{"level":"info","ts":"2024-03-16T00:04:28.934698Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-03-16T00:04:28.934806Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.7:2380"}
	{"level":"info","ts":"2024-03-16T00:04:28.934827Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"pause-033460","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.7:2380"],"advertise-client-urls":["https://192.168.50.7:2379"]}
	
	
	==> etcd [99c11f33033d65762f4ac1fe8ad5d0253ad8ec7dedf6c9827a0a5669ab8f6613] <==
	{"level":"info","ts":"2024-03-16T00:04:34.581023Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c received MsgVoteResp from 856b77cd5251110c at term 4"}
	{"level":"info","ts":"2024-03-16T00:04:34.581039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"856b77cd5251110c became leader at term 4"}
	{"level":"info","ts":"2024-03-16T00:04:34.581047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 856b77cd5251110c elected leader 856b77cd5251110c at term 4"}
	{"level":"info","ts":"2024-03-16T00:04:34.583527Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"856b77cd5251110c","local-member-attributes":"{Name:pause-033460 ClientURLs:[https://192.168.50.7:2379]}","request-path":"/0/members/856b77cd5251110c/attributes","cluster-id":"b162f841703ff885","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:04:34.583586Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:34.584629Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:04:34.584856Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:04:34.585809Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.7:2379"}
	{"level":"info","ts":"2024-03-16T00:04:34.603594Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:34.603746Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:04:51.709238Z","caller":"traceutil/trace.go:171","msg":"trace[236945713] linearizableReadLoop","detail":"{readStateIndex:582; appliedIndex:581; }","duration":"441.648644ms","start":"2024-03-16T00:04:51.267571Z","end":"2024-03-16T00:04:51.709219Z","steps":["trace[236945713] 'read index received'  (duration: 441.469727ms)","trace[236945713] 'applied index is now lower than readState.Index'  (duration: 178.125µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-16T00:04:51.709767Z","caller":"traceutil/trace.go:171","msg":"trace[227077965] transaction","detail":"{read_only:false; response_revision:538; number_of_response:1; }","duration":"525.061339ms","start":"2024-03-16T00:04:51.184692Z","end":"2024-03-16T00:04:51.709753Z","steps":["trace[227077965] 'process raft request'  (duration: 524.395745ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:04:51.709872Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"442.288235ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-033460\" ","response":"range_response_count:1 size:5258"}
	{"level":"info","ts":"2024-03-16T00:04:51.709975Z","caller":"traceutil/trace.go:171","msg":"trace[89413010] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-033460; range_end:; response_count:1; response_revision:538; }","duration":"442.432355ms","start":"2024-03-16T00:04:51.267533Z","end":"2024-03-16T00:04:51.709966Z","steps":["trace[89413010] 'agreement among raft nodes before linearized reading'  (duration: 442.259823ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:04:51.710003Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:04:51.267479Z","time spent":"442.516265ms","remote":"127.0.0.1:33010","response type":"/etcdserverpb.KV/Range","request count":0,"request size":46,"response count":1,"response size":5282,"request content":"key:\"/registry/pods/kube-system/etcd-pause-033460\" "}
	{"level":"warn","ts":"2024-03-16T00:04:51.710277Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.390029ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-16T00:04:51.710372Z","caller":"traceutil/trace.go:171","msg":"trace[1723036101] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:538; }","duration":"228.43613ms","start":"2024-03-16T00:04:51.481881Z","end":"2024-03-16T00:04:51.710317Z","steps":["trace[1723036101] 'agreement among raft nodes before linearized reading'  (duration: 228.376628ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:04:51.71055Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:04:51.184671Z","time spent":"525.21706ms","remote":"127.0.0.1:33010","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5243,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-033460\" mod_revision:475 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-033460\" value_size:5191 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-pause-033460\" > >"}
	{"level":"warn","ts":"2024-03-16T00:04:52.347684Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"369.16991ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1228513223541496357 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" value_size:6085 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-16T00:04:52.348288Z","caller":"traceutil/trace.go:171","msg":"trace[1708351653] linearizableReadLoop","detail":"{readStateIndex:583; appliedIndex:582; }","duration":"119.630192ms","start":"2024-03-16T00:04:52.228626Z","end":"2024-03-16T00:04:52.348256Z","steps":["trace[1708351653] 'read index received'  (duration: 49.631µs)","trace[1708351653] 'applied index is now lower than readState.Index'  (duration: 119.579206ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:04:52.348509Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"119.912822ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" ","response":"range_response_count:1 size:6171"}
	{"level":"info","ts":"2024-03-16T00:04:52.34865Z","caller":"traceutil/trace.go:171","msg":"trace[1857772743] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-pause-033460; range_end:; response_count:1; response_revision:539; }","duration":"120.065089ms","start":"2024-03-16T00:04:52.228576Z","end":"2024-03-16T00:04:52.348641Z","steps":["trace[1857772743] 'agreement among raft nodes before linearized reading'  (duration: 119.81778ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:04:52.348307Z","caller":"traceutil/trace.go:171","msg":"trace[2029435046] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"619.316814ms","start":"2024-03-16T00:04:51.728961Z","end":"2024-03-16T00:04:52.348278Z","steps":["trace[2029435046] 'process raft request'  (duration: 248.537337ms)","trace[2029435046] 'compare'  (duration: 369.029093ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:04:52.351449Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:04:51.72895Z","time spent":"622.213504ms","remote":"127.0.0.1:33010","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6156,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" mod_revision:469 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" value_size:6085 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-033460\" > >"}
	{"level":"warn","ts":"2024-03-16T00:04:53.004723Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"343.058711ms","expected-duration":"100ms","prefix":"","request":"header:<ID:1228513223541496364 > lease_revoke:<id:110c8e44930f9918>","response":"size:29"}
	
	
	==> kernel <==
	 00:04:59 up 2 min,  0 users,  load average: 0.62, 0.29, 0.11
	Linux pause-033460 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [00b89a48cb0981c0bd7ca6055ec49e230712552be8b3e2cdd840eb4ed7a35194] <==
	I0316 00:04:36.103033       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0316 00:04:36.113730       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0316 00:04:36.118600       1 aggregator.go:166] initial CRD sync complete...
	I0316 00:04:36.118819       1 autoregister_controller.go:141] Starting autoregister controller
	I0316 00:04:36.118906       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0316 00:04:36.118941       1 cache.go:39] Caches are synced for autoregister controller
	I0316 00:04:36.905953       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0316 00:04:37.143671       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.50.7]
	I0316 00:04:37.144966       1 controller.go:624] quota admission added evaluator for: endpoints
	I0316 00:04:37.149497       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0316 00:04:37.642724       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0316 00:04:37.654599       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0316 00:04:37.700015       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0316 00:04:37.732213       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0316 00:04:37.739099       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0316 00:04:51.713902       1 trace.go:236] Trace[1239299117]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:c7e86df4-beaf-407a-8fcf-1710601ca75f,client:192.168.50.7,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/etcd-pause-033460/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (16-Mar-2024 00:04:51.180) (total time: 532ms):
	Trace[1239299117]: ["GuaranteedUpdate etcd3" audit-id:c7e86df4-beaf-407a-8fcf-1710601ca75f,key:/pods/kube-system/etcd-pause-033460,type:*core.Pod,resource:pods 532ms (00:04:51.181)
	Trace[1239299117]:  ---"Txn call completed" 527ms (00:04:51.711)]
	Trace[1239299117]: ---"Object stored in database" 528ms (00:04:51.711)
	Trace[1239299117]: [532.669569ms] [532.669569ms] END
	I0316 00:04:52.356768       1 trace.go:236] Trace[85279964]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:eedaf045-5b06-4beb-9a64-965e4de0a538,client:192.168.50.7,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods/kube-controller-manager-pause-033460/status,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PATCH (16-Mar-2024 00:04:51.723) (total time: 632ms):
	Trace[85279964]: ["GuaranteedUpdate etcd3" audit-id:eedaf045-5b06-4beb-9a64-965e4de0a538,key:/pods/kube-system/kube-controller-manager-pause-033460,type:*core.Pod,resource:pods 632ms (00:04:51.724)
	Trace[85279964]:  ---"Txn call completed" 627ms (00:04:52.355)]
	Trace[85279964]: ---"Object stored in database" 628ms (00:04:52.355)
	Trace[85279964]: [632.728595ms] [632.728595ms] END
	
	
	==> kube-apiserver [c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27] <==
	I0316 00:04:18.846654       1 gc_controller.go:91] Shutting down apiserver lease garbage collector
	I0316 00:04:18.846663       1 controller.go:129] Ending legacy_token_tracking_controller
	I0316 00:04:18.846666       1 controller.go:130] Shutting down legacy_token_tracking_controller
	I0316 00:04:18.846677       1 crdregistration_controller.go:142] Shutting down crd-autoregister controller
	I0316 00:04:18.846692       1 available_controller.go:439] Shutting down AvailableConditionController
	I0316 00:04:18.846705       1 customresource_discovery_controller.go:325] Shutting down DiscoveryController
	I0316 00:04:18.846719       1 apf_controller.go:384] Shutting down API Priority and Fairness config worker
	I0316 00:04:18.846728       1 cluster_authentication_trust_controller.go:463] Shutting down cluster_authentication_trust_controller controller
	I0316 00:04:18.847091       1 dynamic_serving_content.go:146] "Shutting down controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0316 00:04:18.847306       1 object_count_tracker.go:151] "StorageObjectCountTracker pruner is exiting"
	I0316 00:04:18.847443       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0316 00:04:18.847590       1 dynamic_cafile_content.go:171] "Shutting down controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0316 00:04:18.847691       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0316 00:04:18.847717       1 controller.go:86] Shutting down OpenAPI V3 AggregationController
	I0316 00:04:18.847726       1 controller.go:84] Shutting down OpenAPI AggregationController
	I0316 00:04:18.847759       1 controller.go:159] Shutting down quota evaluator
	I0316 00:04:18.847766       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.849245       1 secure_serving.go:258] Stopped listening on [::]:8443
	I0316 00:04:18.849292       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0316 00:04:18.851405       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0316 00:04:18.851470       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0316 00:04:18.851501       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.851515       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.851521       1 controller.go:178] quota evaluator worker shutdown
	I0316 00:04:18.851525       1 controller.go:178] quota evaluator worker shutdown
	
	
	==> kube-controller-manager [7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4] <==
	I0316 00:04:13.762544       1 shared_informer.go:311] Waiting for caches to sync for stateful set
	I0316 00:04:13.764267       1 controllermanager.go:642] "Started controller" controller="cronjob-controller"
	I0316 00:04:13.764562       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2"
	I0316 00:04:13.764594       1 shared_informer.go:311] Waiting for caches to sync for cronjob
	I0316 00:04:13.767162       1 controllermanager.go:642] "Started controller" controller="replicationcontroller-controller"
	I0316 00:04:13.767399       1 replica_set.go:214] "Starting controller" name="replicationcontroller"
	I0316 00:04:13.767464       1 shared_informer.go:311] Waiting for caches to sync for ReplicationController
	I0316 00:04:13.770517       1 controllermanager.go:642] "Started controller" controller="job-controller"
	I0316 00:04:13.770633       1 job_controller.go:226] "Starting job controller"
	I0316 00:04:13.770672       1 shared_informer.go:311] Waiting for caches to sync for job
	I0316 00:04:13.773964       1 controllermanager.go:642] "Started controller" controller="ttl-after-finished-controller"
	I0316 00:04:13.774030       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller"
	I0316 00:04:13.774219       1 shared_informer.go:311] Waiting for caches to sync for TTL after finished
	I0316 00:04:13.776956       1 controllermanager.go:642] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0316 00:04:13.777266       1 publisher.go:102] "Starting root CA cert publisher controller"
	I0316 00:04:13.777303       1 shared_informer.go:311] Waiting for caches to sync for crt configmap
	I0316 00:04:13.781405       1 controllermanager.go:642] "Started controller" controller="ttl-controller"
	I0316 00:04:13.781519       1 ttl_controller.go:124] "Starting TTL controller"
	I0316 00:04:13.781697       1 shared_informer.go:311] Waiting for caches to sync for TTL
	I0316 00:04:13.784403       1 shared_informer.go:318] Caches are synced for tokens
	W0316 00:04:23.786035       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	W0316 00:04:24.287097       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	W0316 00:04:25.288977       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	W0316 00:04:27.289679       1 client_builder_dynamic.go:197] get or create service account failed: Get "https://192.168.50.7:8443/api/v1/namespaces/kube-system/serviceaccounts/node-controller": dial tcp 192.168.50.7:8443: connect: connection refused
	E0316 00:04:27.289875       1 cidr_allocator.go:156] "Failed to list all nodes" err="Get \"https://192.168.50.7:8443/api/v1/nodes\": failed to get token for kube-system/node-controller: timed out waiting for the condition"
	
	
	==> kube-controller-manager [cacdbc3b880cc00ad0cbad3345af1e53878840ebf2d5669cda2f22f3416a8a34] <==
	I0316 00:04:48.579673       1 taint_manager.go:210] "Sending events to api server"
	I0316 00:04:48.579568       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0316 00:04:48.579946       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-033460"
	I0316 00:04:48.580078       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0316 00:04:48.580164       1 event.go:307] "Event occurred" object="pause-033460" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-033460 event: Registered Node pause-033460 in Controller"
	I0316 00:04:48.582639       1 shared_informer.go:318] Caches are synced for GC
	I0316 00:04:48.583892       1 shared_informer.go:318] Caches are synced for PV protection
	I0316 00:04:48.588985       1 shared_informer.go:318] Caches are synced for HPA
	I0316 00:04:48.589256       1 shared_informer.go:318] Caches are synced for expand
	I0316 00:04:48.596148       1 shared_informer.go:318] Caches are synced for job
	I0316 00:04:48.616134       1 shared_informer.go:318] Caches are synced for TTL
	I0316 00:04:48.616417       1 shared_informer.go:318] Caches are synced for resource quota
	I0316 00:04:48.621997       1 shared_informer.go:318] Caches are synced for deployment
	I0316 00:04:48.631446       1 shared_informer.go:318] Caches are synced for disruption
	I0316 00:04:48.659813       1 shared_informer.go:318] Caches are synced for resource quota
	I0316 00:04:48.675186       1 shared_informer.go:318] Caches are synced for daemon sets
	I0316 00:04:48.685408       1 shared_informer.go:318] Caches are synced for stateful set
	I0316 00:04:48.700939       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-serving
	I0316 00:04:48.701217       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0316 00:04:48.702495       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-kubelet-client
	I0316 00:04:48.702588       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0316 00:04:48.732560       1 shared_informer.go:318] Caches are synced for certificate-csrapproving
	I0316 00:04:49.129552       1 shared_informer.go:318] Caches are synced for garbage collector
	I0316 00:04:49.129609       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0316 00:04:49.135528       1 shared_informer.go:318] Caches are synced for garbage collector
	
	
	==> kube-proxy [5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046] <==
	I0316 00:04:09.508213       1 server_others.go:69] "Using iptables proxy"
	I0316 00:04:11.744985       1 node.go:141] Successfully retrieved node IP: 192.168.50.7
	I0316 00:04:11.808449       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:04:11.808492       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:04:11.811292       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:04:11.811439       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:04:11.811688       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:04:11.811718       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:11.813453       1 config.go:188] "Starting service config controller"
	I0316 00:04:11.813505       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:04:11.813531       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:04:11.813555       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:04:11.818091       1 config.go:315] "Starting node config controller"
	I0316 00:04:11.818136       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:04:11.915920       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:04:11.916089       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:04:11.919187       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [7b8a5530484e337a0172c6c44b3909a8b032feb87fe90e8177d8fad57aea9c4b] <==
	I0316 00:04:36.455557       1 server_others.go:69] "Using iptables proxy"
	I0316 00:04:36.467650       1 node.go:141] Successfully retrieved node IP: 192.168.50.7
	I0316 00:04:36.505956       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:04:36.505979       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:04:36.508602       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:04:36.508678       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:04:36.508958       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:04:36.509145       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:36.510019       1 config.go:188] "Starting service config controller"
	I0316 00:04:36.510089       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:04:36.510123       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:04:36.510139       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:04:36.510705       1 config.go:315] "Starting node config controller"
	I0316 00:04:36.510892       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:04:36.610186       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:04:36.610271       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:04:36.611723       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6774f685c0e50beeed8aedb826ecfbe374d4f1d72e8082f63a035d882008a43f] <==
	I0316 00:04:32.913748       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:04:35.974961       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:04:35.975013       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:04:35.975024       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:04:35.975031       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:04:36.043992       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:04:36.044932       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:36.052083       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:04:36.052437       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:04:36.053618       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:04:36.052494       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:04:36.154684       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979] <==
	I0316 00:04:09.620432       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:04:11.676114       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:04:11.676272       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:04:11.676305       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:04:11.676414       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:04:11.733797       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:04:11.734043       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:04:11.739539       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:04:11.741551       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:04:11.741668       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:04:11.741854       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:04:11.842513       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:04:29.299265       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0316 00:04:29.299474       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0316 00:04:29.299774       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.626590    3203 scope.go:117] "RemoveContainer" containerID="5e8f5df1ebf9586ae07d96cb026ddfebdef5cc8c016a61cfc21847d9fff8c3c9"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.628617    3203 scope.go:117] "RemoveContainer" containerID="c2ade6bc9d21fe46ab0570a929a1c7c31b3f1a420123f0de2869ae56c9ae0d27"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.630752    3203 scope.go:117] "RemoveContainer" containerID="7b90ea0d40f5857ad513b5baf58c544b83e7a655e663dc77c710c9b756a46ac4"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.631841    3203 scope.go:117] "RemoveContainer" containerID="d95cb14f612e59900a0eb9ad972a85728aa36c70fc4dc362f51c497ac354e979"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: I0316 00:04:31.716146    3203 kubelet_node_status.go:70] "Attempting to register node" node="pause-033460"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: E0316 00:04:31.717406    3203 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.50.7:8443: connect: connection refused" node="pause-033460"
	Mar 16 00:04:31 pause-033460 kubelet[3203]: W0316 00:04:31.964015    3203 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:31 pause-033460 kubelet[3203]: E0316 00:04:31.964091    3203 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:31 pause-033460 kubelet[3203]: W0316 00:04:31.986900    3203 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-033460&limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:31 pause-033460 kubelet[3203]: E0316 00:04:31.986974    3203 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)pause-033460&limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:32 pause-033460 kubelet[3203]: W0316 00:04:32.008704    3203 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:32 pause-033460 kubelet[3203]: E0316 00:04:32.008777    3203 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.7:8443: connect: connection refused
	Mar 16 00:04:32 pause-033460 kubelet[3203]: I0316 00:04:32.519759    3203 kubelet_node_status.go:70] "Attempting to register node" node="pause-033460"
	Mar 16 00:04:35 pause-033460 kubelet[3203]: I0316 00:04:35.977874    3203 apiserver.go:52] "Watching apiserver"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.014056    3203 topology_manager.go:215] "Topology Admit Handler" podUID="97c2191a-dd00-4cce-b8e9-e7ee99ac5d0c" podNamespace="kube-system" podName="coredns-5dd5756b68-px5pk"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.014875    3203 topology_manager.go:215] "Topology Admit Handler" podUID="053cbe3c-45a9-44d2-b4a8-c98db95e8175" podNamespace="kube-system" podName="kube-proxy-zbw4r"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.089660    3203 kubelet_node_status.go:108] "Node was previously registered" node="pause-033460"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.089903    3203 kubelet_node_status.go:73] "Successfully registered node" node="pause-033460"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.091215    3203 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.092108    3203 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.103783    3203 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: E0316 00:04:36.160746    3203 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-pause-033460\" already exists" pod="kube-system/kube-scheduler-pause-033460"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.203676    3203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/053cbe3c-45a9-44d2-b4a8-c98db95e8175-lib-modules\") pod \"kube-proxy-zbw4r\" (UID: \"053cbe3c-45a9-44d2-b4a8-c98db95e8175\") " pod="kube-system/kube-proxy-zbw4r"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.203764    3203 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/053cbe3c-45a9-44d2-b4a8-c98db95e8175-xtables-lock\") pod \"kube-proxy-zbw4r\" (UID: \"053cbe3c-45a9-44d2-b4a8-c98db95e8175\") " pod="kube-system/kube-proxy-zbw4r"
	Mar 16 00:04:36 pause-033460 kubelet[3203]: I0316 00:04:36.315962    3203 scope.go:117] "RemoveContainer" containerID="5bdd14598b4a043525237805c4317e5c4b9fb698a150b30cf158fb2799e22046"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-033460 -n pause-033460
helpers_test.go:261: (dbg) Run:  kubectl --context pause-033460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (69.73s)
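For manual triage of a failure like this one, the post-mortem checks the harness ran above can be repeated by hand. A minimal sketch based only on the commands shown in this log (the pause-033460 profile name and the out/minikube-linux-amd64 binary path come from the test environment and may differ locally):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-033460 -n pause-033460
	kubectl --context pause-033460 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running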

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (284.67s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-402923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-402923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m44.365043497s)

                                                
                                                
-- stdout --
	* [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:06:19.154878  120517 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:06:19.154991  120517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:06:19.155000  120517 out.go:304] Setting ErrFile to fd 2...
	I0316 00:06:19.155004  120517 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:06:19.155208  120517 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:06:19.155928  120517 out.go:298] Setting JSON to false
	I0316 00:06:19.157019  120517 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10129,"bootTime":1710537450,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:06:19.157082  120517 start.go:139] virtualization: kvm guest
	I0316 00:06:19.159444  120517 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:06:19.160819  120517 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:06:19.161016  120517 notify.go:220] Checking for updates...
	I0316 00:06:19.162113  120517 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:06:19.163255  120517 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:06:19.164468  120517 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:06:19.165763  120517 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:06:19.166985  120517 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:06:19.168785  120517 config.go:182] Loaded profile config "cert-expiration-982877": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:06:19.168932  120517 config.go:182] Loaded profile config "cert-options-313368": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:06:19.169044  120517 config.go:182] Loaded profile config "kubernetes-upgrade-209767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:06:19.169173  120517 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:06:19.210267  120517 out.go:177] * Using the kvm2 driver based on user configuration
	I0316 00:06:19.211559  120517 start.go:297] selected driver: kvm2
	I0316 00:06:19.211579  120517 start.go:901] validating driver "kvm2" against <nil>
	I0316 00:06:19.211594  120517 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:06:19.212656  120517 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:06:19.212744  120517 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:06:19.227761  120517 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:06:19.227802  120517 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 00:06:19.228005  120517 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:06:19.228065  120517 cni.go:84] Creating CNI manager for ""
	I0316 00:06:19.228077  120517 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:06:19.228085  120517 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 00:06:19.228140  120517 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:06:19.228225  120517 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:06:19.229959  120517 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:06:19.231157  120517 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:06:19.231191  120517 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:06:19.231201  120517 cache.go:56] Caching tarball of preloaded images
	I0316 00:06:19.231286  120517 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:06:19.231300  120517 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:06:19.231429  120517 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:06:19.231453  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json: {Name:mkdb7a6d9b30d2c855c641324acec232649ebbe2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:06:19.231607  120517 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:06:32.296521  120517 start.go:364] duration metric: took 13.064885629s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:06:32.296590  120517 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:06:32.296757  120517 start.go:125] createHost starting for "" (driver="kvm2")
	I0316 00:06:32.299042  120517 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0316 00:06:32.299243  120517 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:06:32.299290  120517 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:06:32.318632  120517 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35667
	I0316 00:06:32.319114  120517 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:06:32.319843  120517 main.go:141] libmachine: Using API Version  1
	I0316 00:06:32.319873  120517 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:06:32.320207  120517 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:06:32.320425  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:06:32.320609  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:32.320813  120517 start.go:159] libmachine.API.Create for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:06:32.320845  120517 client.go:168] LocalClient.Create starting
	I0316 00:06:32.320880  120517 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0316 00:06:32.320929  120517 main.go:141] libmachine: Decoding PEM data...
	I0316 00:06:32.320954  120517 main.go:141] libmachine: Parsing certificate...
	I0316 00:06:32.321018  120517 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0316 00:06:32.321038  120517 main.go:141] libmachine: Decoding PEM data...
	I0316 00:06:32.321057  120517 main.go:141] libmachine: Parsing certificate...
	I0316 00:06:32.321073  120517 main.go:141] libmachine: Running pre-create checks...
	I0316 00:06:32.321082  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .PreCreateCheck
	I0316 00:06:32.321475  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:06:32.321871  120517 main.go:141] libmachine: Creating machine...
	I0316 00:06:32.321886  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .Create
	I0316 00:06:32.321983  120517 main.go:141] libmachine: (old-k8s-version-402923) Creating KVM machine...
	I0316 00:06:32.323197  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found existing default KVM network
	I0316 00:06:32.324673  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:32.324457  120798 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015e30}
	I0316 00:06:32.330588  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | trying to create private KVM network mk-old-k8s-version-402923 192.168.39.0/24...
	I0316 00:06:32.399265  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | private KVM network mk-old-k8s-version-402923 192.168.39.0/24 created
	I0316 00:06:32.399299  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:32.399221  120798 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:06:32.399314  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923 ...
	I0316 00:06:32.399355  120517 main.go:141] libmachine: (old-k8s-version-402923) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0316 00:06:32.399430  120517 main.go:141] libmachine: (old-k8s-version-402923) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0316 00:06:32.650493  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:32.650376  120798 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa...
	I0316 00:06:32.966422  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:32.966296  120798 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/old-k8s-version-402923.rawdisk...
	I0316 00:06:32.966453  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Writing magic tar header
	I0316 00:06:32.966482  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Writing SSH key tar header
	I0316 00:06:32.966513  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:32.966461  120798 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923 ...
	I0316 00:06:32.966645  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923
	I0316 00:06:32.966680  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0316 00:06:32.966696  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923 (perms=drwx------)
	I0316 00:06:32.966715  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0316 00:06:32.966727  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0316 00:06:32.966741  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0316 00:06:32.966758  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0316 00:06:32.966788  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:06:32.966810  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0316 00:06:32.966822  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0316 00:06:32.966834  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home/jenkins
	I0316 00:06:32.966855  120517 main.go:141] libmachine: (old-k8s-version-402923) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0316 00:06:32.966877  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Checking permissions on dir: /home
	I0316 00:06:32.966887  120517 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:06:32.966902  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Skipping /home - not owner
	I0316 00:06:32.968131  120517 main.go:141] libmachine: (old-k8s-version-402923) define libvirt domain using xml: 
	I0316 00:06:32.968155  120517 main.go:141] libmachine: (old-k8s-version-402923) <domain type='kvm'>
	I0316 00:06:32.968166  120517 main.go:141] libmachine: (old-k8s-version-402923)   <name>old-k8s-version-402923</name>
	I0316 00:06:32.968174  120517 main.go:141] libmachine: (old-k8s-version-402923)   <memory unit='MiB'>2200</memory>
	I0316 00:06:32.968182  120517 main.go:141] libmachine: (old-k8s-version-402923)   <vcpu>2</vcpu>
	I0316 00:06:32.968205  120517 main.go:141] libmachine: (old-k8s-version-402923)   <features>
	I0316 00:06:32.968218  120517 main.go:141] libmachine: (old-k8s-version-402923)     <acpi/>
	I0316 00:06:32.968224  120517 main.go:141] libmachine: (old-k8s-version-402923)     <apic/>
	I0316 00:06:32.968231  120517 main.go:141] libmachine: (old-k8s-version-402923)     <pae/>
	I0316 00:06:32.968238  120517 main.go:141] libmachine: (old-k8s-version-402923)     
	I0316 00:06:32.968247  120517 main.go:141] libmachine: (old-k8s-version-402923)   </features>
	I0316 00:06:32.968262  120517 main.go:141] libmachine: (old-k8s-version-402923)   <cpu mode='host-passthrough'>
	I0316 00:06:32.968273  120517 main.go:141] libmachine: (old-k8s-version-402923)   
	I0316 00:06:32.968285  120517 main.go:141] libmachine: (old-k8s-version-402923)   </cpu>
	I0316 00:06:32.968298  120517 main.go:141] libmachine: (old-k8s-version-402923)   <os>
	I0316 00:06:32.968307  120517 main.go:141] libmachine: (old-k8s-version-402923)     <type>hvm</type>
	I0316 00:06:32.968322  120517 main.go:141] libmachine: (old-k8s-version-402923)     <boot dev='cdrom'/>
	I0316 00:06:32.968329  120517 main.go:141] libmachine: (old-k8s-version-402923)     <boot dev='hd'/>
	I0316 00:06:32.968337  120517 main.go:141] libmachine: (old-k8s-version-402923)     <bootmenu enable='no'/>
	I0316 00:06:32.968343  120517 main.go:141] libmachine: (old-k8s-version-402923)   </os>
	I0316 00:06:32.968351  120517 main.go:141] libmachine: (old-k8s-version-402923)   <devices>
	I0316 00:06:32.968359  120517 main.go:141] libmachine: (old-k8s-version-402923)     <disk type='file' device='cdrom'>
	I0316 00:06:32.968374  120517 main.go:141] libmachine: (old-k8s-version-402923)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/boot2docker.iso'/>
	I0316 00:06:32.968387  120517 main.go:141] libmachine: (old-k8s-version-402923)       <target dev='hdc' bus='scsi'/>
	I0316 00:06:32.968403  120517 main.go:141] libmachine: (old-k8s-version-402923)       <readonly/>
	I0316 00:06:32.968417  120517 main.go:141] libmachine: (old-k8s-version-402923)     </disk>
	I0316 00:06:32.968429  120517 main.go:141] libmachine: (old-k8s-version-402923)     <disk type='file' device='disk'>
	I0316 00:06:32.968440  120517 main.go:141] libmachine: (old-k8s-version-402923)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0316 00:06:32.968458  120517 main.go:141] libmachine: (old-k8s-version-402923)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/old-k8s-version-402923.rawdisk'/>
	I0316 00:06:32.968467  120517 main.go:141] libmachine: (old-k8s-version-402923)       <target dev='hda' bus='virtio'/>
	I0316 00:06:32.968486  120517 main.go:141] libmachine: (old-k8s-version-402923)     </disk>
	I0316 00:06:32.968502  120517 main.go:141] libmachine: (old-k8s-version-402923)     <interface type='network'>
	I0316 00:06:32.968511  120517 main.go:141] libmachine: (old-k8s-version-402923)       <source network='mk-old-k8s-version-402923'/>
	I0316 00:06:32.968517  120517 main.go:141] libmachine: (old-k8s-version-402923)       <model type='virtio'/>
	I0316 00:06:32.968532  120517 main.go:141] libmachine: (old-k8s-version-402923)     </interface>
	I0316 00:06:32.968539  120517 main.go:141] libmachine: (old-k8s-version-402923)     <interface type='network'>
	I0316 00:06:32.968555  120517 main.go:141] libmachine: (old-k8s-version-402923)       <source network='default'/>
	I0316 00:06:32.968569  120517 main.go:141] libmachine: (old-k8s-version-402923)       <model type='virtio'/>
	I0316 00:06:32.968581  120517 main.go:141] libmachine: (old-k8s-version-402923)     </interface>
	I0316 00:06:32.968589  120517 main.go:141] libmachine: (old-k8s-version-402923)     <serial type='pty'>
	I0316 00:06:32.968598  120517 main.go:141] libmachine: (old-k8s-version-402923)       <target port='0'/>
	I0316 00:06:32.968612  120517 main.go:141] libmachine: (old-k8s-version-402923)     </serial>
	I0316 00:06:32.968621  120517 main.go:141] libmachine: (old-k8s-version-402923)     <console type='pty'>
	I0316 00:06:32.968629  120517 main.go:141] libmachine: (old-k8s-version-402923)       <target type='serial' port='0'/>
	I0316 00:06:32.968665  120517 main.go:141] libmachine: (old-k8s-version-402923)     </console>
	I0316 00:06:32.968690  120517 main.go:141] libmachine: (old-k8s-version-402923)     <rng model='virtio'>
	I0316 00:06:32.968704  120517 main.go:141] libmachine: (old-k8s-version-402923)       <backend model='random'>/dev/random</backend>
	I0316 00:06:32.968712  120517 main.go:141] libmachine: (old-k8s-version-402923)     </rng>
	I0316 00:06:32.968722  120517 main.go:141] libmachine: (old-k8s-version-402923)     
	I0316 00:06:32.968746  120517 main.go:141] libmachine: (old-k8s-version-402923)     
	I0316 00:06:32.968773  120517 main.go:141] libmachine: (old-k8s-version-402923)   </devices>
	I0316 00:06:32.968791  120517 main.go:141] libmachine: (old-k8s-version-402923) </domain>
	I0316 00:06:32.968799  120517 main.go:141] libmachine: (old-k8s-version-402923) 
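The block above is the libvirt domain XML the kvm2 driver generates for this guest: 2200 MiB of memory, two vCPUs, the boot2docker ISO attached as a CD-ROM, a raw virtio disk, and two virtio NICs (the private mk-old-k8s-version-402923 network plus libvirt's default network). minikube defines the domain through the libvirt API; the following is only a minimal Go sketch of the same define-and-start flow, shelling out to the virsh CLI instead, and the XML path and hard-coded domain name are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"os/exec"
)

// defineAndStartDomain sketches the define-and-start flow seen in the log above
// by shelling out to virsh. The kvm2 driver talks to libvirt directly; this CLI
// version is only illustrative, and the XML path is a hypothetical example.
func defineAndStartDomain(xmlPath, name string) error {
	// "virsh define" registers the domain from its XML description.
	if out, err := exec.Command("virsh", "define", xmlPath).CombinedOutput(); err != nil {
		return fmt.Errorf("define failed: %v: %s", err, out)
	}
	// "virsh start" actually boots the freshly defined domain.
	if out, err := exec.Command("virsh", "start", name).CombinedOutput(); err != nil {
		return fmt.Errorf("start failed: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := defineAndStartDomain("/tmp/old-k8s-version-402923.xml", "old-k8s-version-402923"); err != nil {
		fmt.Println(err)
	}
}

Defining only registers the configuration; starting the domain is a separate step, after which the driver begins polling for the guest's IP address, as the following log lines show.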
	I0316 00:06:32.973218  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:8b:19:2a in network default
	I0316 00:06:32.973921  120517 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:06:32.973948  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:32.974757  120517 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:06:32.975104  120517 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:06:32.975708  120517 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:06:32.976585  120517 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:06:34.300640  120517 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:06:34.301573  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:34.302054  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:34.302094  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:34.302038  120798 retry.go:31] will retry after 291.989094ms: waiting for machine to come up
	I0316 00:06:34.595834  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:34.596444  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:34.596499  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:34.596407  120798 retry.go:31] will retry after 245.39827ms: waiting for machine to come up
	I0316 00:06:34.844033  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:34.844540  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:34.844565  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:34.844496  120798 retry.go:31] will retry after 451.667552ms: waiting for machine to come up
	I0316 00:06:35.298412  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:35.298905  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:35.298941  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:35.298850  120798 retry.go:31] will retry after 448.805662ms: waiting for machine to come up
	I0316 00:06:35.750003  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:35.750460  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:35.750497  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:35.750408  120798 retry.go:31] will retry after 603.661067ms: waiting for machine to come up
	I0316 00:06:36.355121  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:36.355556  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:36.355583  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:36.355507  120798 retry.go:31] will retry after 699.589708ms: waiting for machine to come up
	I0316 00:06:37.056464  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:37.057111  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:37.057145  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:37.057049  120798 retry.go:31] will retry after 804.683956ms: waiting for machine to come up
	I0316 00:06:37.863549  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:37.863976  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:37.863996  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:37.863957  120798 retry.go:31] will retry after 1.420003056s: waiting for machine to come up
	I0316 00:06:39.285412  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:39.285970  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:39.285999  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:39.285923  120798 retry.go:31] will retry after 1.553432194s: waiting for machine to come up
	I0316 00:06:40.842088  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:40.842651  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:40.842681  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:40.842600  120798 retry.go:31] will retry after 1.479864987s: waiting for machine to come up
	I0316 00:06:42.323940  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:42.324372  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:42.324398  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:42.324343  120798 retry.go:31] will retry after 2.688971456s: waiting for machine to come up
	I0316 00:06:45.015535  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:45.016115  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:45.016143  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:45.016064  120798 retry.go:31] will retry after 3.45172657s: waiting for machine to come up
	I0316 00:06:48.469967  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:48.470627  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:48.470675  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:48.470601  120798 retry.go:31] will retry after 3.585583111s: waiting for machine to come up
	I0316 00:06:52.060308  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:52.060764  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:06:52.060782  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:06:52.060737  120798 retry.go:31] will retry after 3.733293602s: waiting for machine to come up
	I0316 00:06:55.798157  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:55.798651  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:55.798680  120517 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
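Between 00:06:34 and 00:06:55 the driver polls the network's DHCP leases for the guest's MAC address, backing off between attempts (roughly 250 ms growing to a few seconds) until 192.168.39.107 appears. Below is a stripped-down Go sketch of such a wait loop; the lookupIP helper is a hypothetical stand-in for the libvirt lease query, not the driver's real implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// lookupIP is a hypothetical stand-in for querying libvirt's DHCP leases for
// the guest's MAC address; it returns an error until a lease is available.
var lookupIP = func() (string, error) { return "", errors.New("no lease yet") }

// waitForIP retries with a growing delay, mirroring the
// "will retry after ..." messages in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		time.Sleep(delay)
		if delay < 4*time.Second {
			delay *= 2 // back off, roughly like the retry intervals in the log
		}
	}
	return "", fmt.Errorf("timed out after %s waiting for an IP", timeout)
}

func main() {
	ip, err := waitForIP(5 * time.Second)
	fmt.Println(ip, err)
}

Once an address is found, the driver reserves it as a static DHCP entry for the guest, which is what the next few log lines record.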
	I0316 00:06:55.798726  120517 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:06:55.799221  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923
	I0316 00:06:55.875299  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:06:55.875351  120517 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:06:55.875366  120517 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:06:55.877903  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:55.878378  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:minikube Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:55.878424  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:55.878526  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:06:55.878559  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:06:55.878596  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:06:55.878612  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:06:55.878637  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:06:56.011149  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
	I0316 00:06:56.011473  120517 main.go:141] libmachine: (old-k8s-version-402923) KVM machine creation complete!
	I0316 00:06:56.011798  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:06:56.012517  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:56.012735  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:56.012943  120517 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0316 00:06:56.012960  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:06:56.014204  120517 main.go:141] libmachine: Detecting operating system of created instance...
	I0316 00:06:56.014219  120517 main.go:141] libmachine: Waiting for SSH to be available...
	I0316 00:06:56.014225  120517 main.go:141] libmachine: Getting to WaitForSSH function...
	I0316 00:06:56.014232  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.016623  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.017013  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.017058  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.017212  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:56.017381  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.017527  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.017637  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:56.017852  120517 main.go:141] libmachine: Using SSH client type: native
	I0316 00:06:56.018100  120517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:06:56.018117  120517 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0316 00:06:56.126514  120517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:06:56.126546  120517 main.go:141] libmachine: Detecting the provisioner...
	I0316 00:06:56.126556  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.129289  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.129677  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.129715  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.129793  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:56.130009  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.130179  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.130297  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:56.130460  120517 main.go:141] libmachine: Using SSH client type: native
	I0316 00:06:56.130631  120517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:06:56.130642  120517 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0316 00:06:56.244330  120517 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0316 00:06:56.244424  120517 main.go:141] libmachine: found compatible host: buildroot
	I0316 00:06:56.244435  120517 main.go:141] libmachine: Provisioning with buildroot...
	I0316 00:06:56.244443  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:06:56.244706  120517 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:06:56.244736  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:06:56.244917  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.247565  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.247853  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.247894  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.248035  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:56.248214  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.248349  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.248478  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:56.248638  120517 main.go:141] libmachine: Using SSH client type: native
	I0316 00:06:56.248868  120517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:06:56.248888  120517 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:06:56.369647  120517 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:06:56.369698  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.372441  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.372796  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.372824  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.373019  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:56.373222  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.373411  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.373560  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:56.373738  120517 main.go:141] libmachine: Using SSH client type: native
	I0316 00:06:56.373949  120517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:06:56.373986  120517 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:06:56.492359  120517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:06:56.492386  120517 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:06:56.492413  120517 buildroot.go:174] setting up certificates
	I0316 00:06:56.492424  120517 provision.go:84] configureAuth start
	I0316 00:06:56.492438  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:06:56.492695  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:06:56.495435  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.495748  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.495777  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.495886  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.497864  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.498188  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.498230  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.498358  120517 provision.go:143] copyHostCerts
	I0316 00:06:56.498431  120517 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:06:56.498444  120517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:06:56.498503  120517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:06:56.498622  120517 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:06:56.498634  120517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:06:56.498669  120517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:06:56.498748  120517 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:06:56.498757  120517 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:06:56.498805  120517 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:06:56.498873  120517 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:06:56.611091  120517 provision.go:177] copyRemoteCerts
	I0316 00:06:56.611188  120517 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:06:56.611225  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.613825  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.614133  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.614167  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.614319  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:56.614498  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.614660  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:56.614785  120517 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:06:56.697870  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:06:56.724012  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:06:56.748937  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:06:56.773597  120517 provision.go:87] duration metric: took 281.158579ms to configureAuth
	I0316 00:06:56.773648  120517 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:06:56.773867  120517 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:06:56.773980  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:56.776614  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.776923  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:56.776960  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:56.777059  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:56.777283  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.777463  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:56.777593  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:56.777800  120517 main.go:141] libmachine: Using SSH client type: native
	I0316 00:06:56.778009  120517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:06:56.778030  120517 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:06:57.059821  120517 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:06:57.059851  120517 main.go:141] libmachine: Checking connection to Docker...
	I0316 00:06:57.059863  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetURL
	I0316 00:06:57.061133  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using libvirt version 6000000
	I0316 00:06:57.063218  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.063593  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.063629  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.063764  120517 main.go:141] libmachine: Docker is up and running!
	I0316 00:06:57.063779  120517 main.go:141] libmachine: Reticulating splines...
	I0316 00:06:57.063786  120517 client.go:171] duration metric: took 24.742934382s to LocalClient.Create
	I0316 00:06:57.063810  120517 start.go:167] duration metric: took 24.743000512s to libmachine.API.Create "old-k8s-version-402923"
	I0316 00:06:57.063819  120517 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:06:57.063832  120517 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:06:57.063850  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:57.064096  120517 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:06:57.064122  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:57.066266  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.066556  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.066593  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.066781  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:57.066974  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:57.067148  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:57.067272  120517 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:06:57.150934  120517 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:06:57.155767  120517 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:06:57.155791  120517 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:06:57.155863  120517 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:06:57.155967  120517 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:06:57.156095  120517 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:06:57.166526  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:06:57.191084  120517 start.go:296] duration metric: took 127.245599ms for postStartSetup
	I0316 00:06:57.191148  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:06:57.191742  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:06:57.194944  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.195304  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.195350  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.195595  120517 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:06:57.195800  120517 start.go:128] duration metric: took 24.899028853s to createHost
	I0316 00:06:57.195829  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:57.197849  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.198094  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.198126  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.198255  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:57.198430  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:57.198588  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:57.198718  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:57.198872  120517 main.go:141] libmachine: Using SSH client type: native
	I0316 00:06:57.199041  120517 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:06:57.199054  120517 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:06:57.308017  120517 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710547617.291374497
	
	I0316 00:06:57.308050  120517 fix.go:216] guest clock: 1710547617.291374497
	I0316 00:06:57.308062  120517 fix.go:229] Guest: 2024-03-16 00:06:57.291374497 +0000 UTC Remote: 2024-03-16 00:06:57.195812305 +0000 UTC m=+38.097389487 (delta=95.562192ms)
	I0316 00:06:57.308095  120517 fix.go:200] guest clock delta is within tolerance: 95.562192ms
	I0316 00:06:57.308108  120517 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 25.01154557s
	I0316 00:06:57.308147  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:57.308422  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:06:57.311302  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.311661  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.311703  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.311786  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:57.312282  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:57.312487  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:06:57.312596  120517 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:06:57.312648  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:57.312778  120517 ssh_runner.go:195] Run: cat /version.json
	I0316 00:06:57.312802  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:06:57.315548  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.315744  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.315926  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.315950  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.316198  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:57.316212  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:57.316247  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:57.316401  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:57.316403  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:06:57.316562  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:06:57.316595  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:57.316704  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:06:57.316711  120517 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:06:57.316796  120517 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:06:57.396622  120517 ssh_runner.go:195] Run: systemctl --version
	I0316 00:06:57.424548  120517 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:06:57.585850  120517 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:06:57.592074  120517 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:06:57.592136  120517 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:06:57.609885  120517 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:06:57.609910  120517 start.go:494] detecting cgroup driver to use...
	I0316 00:06:57.609982  120517 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:06:57.627831  120517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:06:57.642606  120517 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:06:57.642679  120517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:06:57.657565  120517 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:06:57.671720  120517 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:06:57.795568  120517 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:06:57.961206  120517 docker.go:233] disabling docker service ...
	I0316 00:06:57.961276  120517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:06:57.981228  120517 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:06:57.994830  120517 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:06:58.132050  120517 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:06:58.286594  120517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:06:58.302061  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:06:58.321442  120517 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:06:58.321502  120517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:06:58.332210  120517 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:06:58.332275  120517 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:06:58.342555  120517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:06:58.352746  120517 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:06:58.362817  120517 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
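The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.2 (the pause image used with Kubernetes v1.20), the cgroup manager is switched to cgroupfs, any existing conmon_cgroup line is dropped, and conmon_cgroup = "pod" is re-inserted after the cgroup_manager key. Assuming the stock drop-in already carries pause_image and cgroup_manager entries, the relevant lines end up reading roughly:

pause_image = "registry.k8s.io/pause:3.2"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"

The systemctl daemon-reload and crio restart a few lines further down are what make CRI-O pick these settings up.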
	I0316 00:06:58.373497  120517 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:06:58.382955  120517 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:06:58.383012  120517 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:06:58.395450  120517 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:06:58.406626  120517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:06:58.545711  120517 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:06:58.710242  120517 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:06:58.710328  120517 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:06:58.716597  120517 start.go:562] Will wait 60s for crictl version
	I0316 00:06:58.716668  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:06:58.721655  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:06:58.762768  120517 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:06:58.762865  120517 ssh_runner.go:195] Run: crio --version
	I0316 00:06:58.794603  120517 ssh_runner.go:195] Run: crio --version
	I0316 00:06:58.828649  120517 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:06:58.830053  120517 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:06:58.833183  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:58.833574  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:06:48 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:06:58.833605  120517 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:06:58.833793  120517 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:06:58.838134  120517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:06:58.851566  120517 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:06:58.851705  120517 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:06:58.851775  120517 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:06:58.886934  120517 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:06:58.887009  120517 ssh_runner.go:195] Run: which lz4
	I0316 00:06:58.891613  120517 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:06:58.896325  120517 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:06:58.896356  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:07:00.769649  120517 crio.go:444] duration metric: took 1.878079896s to copy over tarball
	I0316 00:07:00.769723  120517 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:07:03.445589  120517 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.675828131s)
	I0316 00:07:03.445683  120517 crio.go:451] duration metric: took 2.67600398s to extract the tarball
	I0316 00:07:03.445711  120517 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:07:03.491165  120517 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:07:03.549799  120517 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:07:03.549849  120517 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:07:03.549961  120517 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:07:03.549963  120517 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:07:03.549963  120517 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:07:03.550034  120517 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:07:03.550078  120517 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:07:03.549994  120517 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:07:03.550020  120517 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:07:03.550156  120517 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:07:03.551699  120517 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:07:03.551713  120517 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:07:03.551712  120517 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:07:03.551717  120517 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:07:03.551734  120517 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:07:03.551751  120517 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:07:03.551758  120517 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:07:03.552046  120517 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:07:03.706372  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:07:03.706372  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:07:03.709472  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:07:03.710870  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:07:03.716088  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:07:03.721072  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:07:03.723460  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:07:03.878231  120517 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:07:03.878290  120517 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:07:03.878338  120517 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:07:03.878353  120517 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:07:03.878406  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.878422  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.878260  120517 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:07:03.878476  120517 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:07:03.878510  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.909672  120517 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:07:03.909695  120517 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:07:03.909728  120517 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:07:03.909739  120517 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:07:03.909752  120517 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:07:03.909780  120517 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:07:03.909783  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.909829  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.909783  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.915090  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:07:03.915108  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:07:03.915159  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:07:03.915171  120517 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:07:03.915232  120517 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:07:03.915271  120517 ssh_runner.go:195] Run: which crictl
	I0316 00:07:03.915455  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:07:03.921853  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:07:03.921952  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:07:04.060174  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:07:04.060193  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:07:04.060297  120517 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:07:04.060332  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:07:04.060369  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:07:04.060384  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:07:04.060417  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:07:04.095528  120517 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:07:04.186050  120517 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:07:04.408460  120517 cache_images.go:92] duration metric: took 858.583082ms to LoadCachedImages
	W0316 00:07:04.408561  120517 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
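The cache-load failure above is non-fatal: the per-image files under .minikube/cache/images were never written, so minikube gives up on LoadCachedImages and the images are instead pulled by kubeadm's preflight further down in this log. A hedged sketch of one way to repopulate that cache ahead of time, assuming minikube's standard cache subcommand:

    # pull an image into the local minikube cache so later starts can load it without the registry
    minikube cache add registry.k8s.io/kube-apiserver:v1.20.0
    # inspect what is currently cached
    minikube cache list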
	I0316 00:07:04.408579  120517 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:07:04.408722  120517 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
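The [Unit]/[Service] snippet above is the kubelet drop-in that minikube writes to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below, before running daemon-reload and starting the service. A small sketch for inspecting the effective unit on the guest:

    # show the kubelet unit together with the drop-in minikube installed
    systemctl cat kubelet
    # confirm the service picked up the new ExecStart after the daemon-reload
    systemctl status kubelet --no-pager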
	I0316 00:07:04.408823  120517 ssh_runner.go:195] Run: crio config
	I0316 00:07:04.461472  120517 cni.go:84] Creating CNI manager for ""
	I0316 00:07:04.461499  120517 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:07:04.461512  120517 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:07:04.461538  120517 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:07:04.461714  120517 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
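The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new and promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs (both steps appear later in this log). A minimal sketch for pulling that file off the node for inspection, assuming the profile name used throughout this test:

    # read the kubeadm config that was actually handed to kubeadm init
    minikube -p old-k8s-version-402923 ssh -- sudo cat /var/tmp/minikube/kubeadm.yaml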
	
	I0316 00:07:04.461786  120517 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:07:04.473706  120517 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:07:04.473781  120517 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:07:04.485343  120517 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:07:04.504988  120517 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:07:04.524439  120517 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:07:04.543091  120517 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:07:04.547248  120517 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:07:04.560488  120517 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:07:04.708338  120517 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:07:04.730313  120517 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:07:04.730339  120517 certs.go:194] generating shared ca certs ...
	I0316 00:07:04.730360  120517 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:07:04.730513  120517 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:07:04.730586  120517 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:07:04.730602  120517 certs.go:256] generating profile certs ...
	I0316 00:07:04.730681  120517 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:07:04.730698  120517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt with IP's: []
	I0316 00:07:04.912721  120517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt ...
	I0316 00:07:04.912753  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: {Name:mka13d4a6845e10e1f9c346ea283018d773bec23 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:07:04.912959  120517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key ...
	I0316 00:07:04.912982  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key: {Name:mk59cf4521a43cce7d8681d2ed16c0f52e7ab703 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:07:04.913097  120517 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:07:04.913118  120517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt.467cf8c5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.107]
	I0316 00:07:05.032905  120517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt.467cf8c5 ...
	I0316 00:07:05.032938  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt.467cf8c5: {Name:mk8e9f2a094f09e042ef92a026cf82afcf543c8d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:07:05.033129  120517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5 ...
	I0316 00:07:05.033149  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5: {Name:mk2cccd707b7dd3d13edbd0254dc1646a5f0806c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:07:05.033254  120517 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt.467cf8c5 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt
	I0316 00:07:05.033393  120517 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key
	I0316 00:07:05.033483  120517 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:07:05.033508  120517 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt with IP's: []
	I0316 00:07:05.334313  120517 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt ...
	I0316 00:07:05.334356  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt: {Name:mkabaf8077097d186fec4dcdca86a4d065b1d8de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:07:05.334556  120517 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key ...
	I0316 00:07:05.334578  120517 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key: {Name:mke6ab3919686aa50015135154ecaf2921eda38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
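The apiserver profile certificate generated above is signed for the service VIP, loopback, and the node IP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.39.107). A quick sketch for double-checking those SANs on the copy that is scp'd to the node below:

    # print the Subject Alternative Names baked into the apiserver certificate
    openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'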
	I0316 00:07:05.334793  120517 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:07:05.334850  120517 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:07:05.334868  120517 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:07:05.334920  120517 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:07:05.334957  120517 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:07:05.334995  120517 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:07:05.335069  120517 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:07:05.335736  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:07:05.363739  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:07:05.394805  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:07:05.421478  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:07:05.447264  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:07:05.474643  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:07:05.505988  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:07:05.535136  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:07:05.568662  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:07:05.600190  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:07:05.636556  120517 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:07:05.662725  120517 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:07:05.680978  120517 ssh_runner.go:195] Run: openssl version
	I0316 00:07:05.687220  120517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:07:05.699501  120517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:07:05.705959  120517 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:07:05.706014  120517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:07:05.714082  120517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:07:05.730581  120517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:07:05.746175  120517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:07:05.750934  120517 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:07:05.751007  120517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:07:05.757353  120517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:07:05.769224  120517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:07:05.781101  120517 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:07:05.786363  120517 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:07:05.786437  120517 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:07:05.792562  120517 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
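Each certificate copied to /usr/share/ca-certificates is hashed with openssl and then linked into /etc/ssl/certs under <hash>.0, which is how OpenSSL-based clients on the node find the minikube CA. A small sketch of the same hashing step by hand (b5213941 is the hash the log computed for minikubeCA.pem):

    # compute the subject hash that names the /etc/ssl/certs symlink
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # the symlink minikube then creates
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0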
	I0316 00:07:05.804437  120517 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:07:05.808794  120517 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 00:07:05.808860  120517 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:07:05.808962  120517 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:07:05.809034  120517 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:07:05.846481  120517 cri.go:89] found id: ""
	I0316 00:07:05.846570  120517 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 00:07:05.857715  120517 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:07:05.869291  120517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:07:05.881144  120517 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:07:05.881163  120517 kubeadm.go:156] found existing configuration files:
	
	I0316 00:07:05.881215  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:07:05.894418  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:07:05.894504  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:07:05.908057  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:07:05.918131  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:07:05.918195  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:07:05.928387  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:07:05.938227  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:07:05.938289  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:07:05.948981  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:07:05.961399  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:07:05.961466  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:07:05.973877  120517 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:07:06.085973  120517 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:07:06.086061  120517 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:07:06.233617  120517 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:07:06.233798  120517 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:07:06.233960  120517 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:07:06.476813  120517 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:07:06.563943  120517 out.go:204]   - Generating certificates and keys ...
	I0316 00:07:06.564115  120517 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:07:06.564209  120517 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:07:06.768145  120517 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 00:07:07.008494  120517 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 00:07:07.472262  120517 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 00:07:07.706268  120517 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 00:07:07.849566  120517 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 00:07:07.849757  120517 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-402923] and IPs [192.168.39.107 127.0.0.1 ::1]
	I0316 00:07:07.939571  120517 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 00:07:07.939800  120517 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-402923] and IPs [192.168.39.107 127.0.0.1 ::1]
	I0316 00:07:08.002185  120517 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 00:07:08.194726  120517 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 00:07:08.351471  120517 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 00:07:08.351777  120517 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:07:08.442602  120517 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:07:08.621649  120517 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:07:09.264747  120517 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:07:09.385315  120517 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:07:09.408688  120517 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:07:09.410381  120517 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:07:09.410460  120517 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:07:09.558764  120517 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:07:09.560696  120517 out.go:204]   - Booting up control plane ...
	I0316 00:07:09.560842  120517 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:07:09.583267  120517 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:07:09.585601  120517 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:07:09.589699  120517 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:07:09.592299  120517 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:07:49.592795  120517 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:07:49.594074  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:07:49.594287  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:07:54.594939  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:07:54.595194  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:08:04.596571  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:08:04.596821  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:08:24.598739  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:08:24.598998  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:09:04.597304  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:09:04.597637  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:09:04.597663  120517 kubeadm.go:309] 
	I0316 00:09:04.597727  120517 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:09:04.597792  120517 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:09:04.597805  120517 kubeadm.go:309] 
	I0316 00:09:04.597848  120517 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:09:04.597914  120517 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:09:04.598118  120517 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:09:04.598142  120517 kubeadm.go:309] 
	I0316 00:09:04.598281  120517 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:09:04.598324  120517 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:09:04.598370  120517 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:09:04.598379  120517 kubeadm.go:309] 
	I0316 00:09:04.598537  120517 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:09:04.598658  120517 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:09:04.598679  120517 kubeadm.go:309] 
	I0316 00:09:04.598815  120517 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:09:04.598962  120517 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:09:04.599058  120517 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:09:04.599120  120517 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:09:04.599129  120517 kubeadm.go:309] 
	I0316 00:09:04.599812  120517 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:09:04.599920  120517 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:09:04.600001  120517 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
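At this point the first kubeadm init attempt has failed because the kubelet never answered on its health port (10248). A troubleshooting sketch using only the commands kubeadm itself recommends above, run on the guest:

    # kubelet health and the most recent journal entries
    systemctl status kubelet --no-pager
    journalctl -xeu kubelet | tail -n 50
    # list control-plane containers CRI-O may have started (or crashed)
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause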
	W0316 00:09:04.600186  120517 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-402923] and IPs [192.168.39.107 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-402923] and IPs [192.168.39.107 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0316 00:09:04.600236  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:09:06.278429  120517 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.678155291s)
	I0316 00:09:06.278524  120517 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:09:06.294404  120517 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:09:06.308839  120517 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:09:06.308860  120517 kubeadm.go:156] found existing configuration files:
	
	I0316 00:09:06.308925  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:09:06.322086  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:09:06.322174  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:09:06.335843  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:09:06.345945  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:09:06.346016  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:09:06.356543  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:09:06.365899  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:09:06.365972  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:09:06.377329  120517 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:09:06.388721  120517 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:09:06.388784  120517 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:09:06.402961  120517 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:09:06.494323  120517 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:09:06.494423  120517 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:09:06.675818  120517 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:09:06.675963  120517 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:09:06.676087  120517 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:09:06.865422  120517 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:09:06.867369  120517 out.go:204]   - Generating certificates and keys ...
	I0316 00:09:06.867475  120517 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:09:06.867561  120517 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:09:06.867683  120517 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:09:06.867749  120517 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:09:06.867815  120517 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:09:06.867863  120517 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:09:06.867916  120517 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:09:06.867971  120517 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:09:06.868208  120517 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:09:06.869273  120517 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:09:06.869543  120517 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:09:06.869708  120517 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:09:07.103048  120517 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:09:07.169367  120517 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:09:07.417342  120517 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:09:07.642561  120517 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:09:07.658220  120517 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:09:07.660279  120517 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:09:07.660348  120517 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:09:07.822343  120517 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:09:07.824237  120517 out.go:204]   - Booting up control plane ...
	I0316 00:09:07.824372  120517 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:09:07.824479  120517 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:09:07.826099  120517 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:09:07.826804  120517 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:09:07.829660  120517 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:09:47.831788  120517 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:09:47.832327  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:09:47.832569  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:09:52.833226  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:09:52.833468  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:10:02.834489  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:10:02.834680  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:10:22.835868  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:10:22.836118  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:11:02.835582  120517 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:11:02.835755  120517 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:11:02.835764  120517 kubeadm.go:309] 
	I0316 00:11:02.835813  120517 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:11:02.835848  120517 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:11:02.835854  120517 kubeadm.go:309] 
	I0316 00:11:02.835887  120517 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:11:02.835942  120517 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:11:02.836045  120517 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:11:02.836054  120517 kubeadm.go:309] 
	I0316 00:11:02.836180  120517 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:11:02.836224  120517 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:11:02.836251  120517 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:11:02.836258  120517 kubeadm.go:309] 
	I0316 00:11:02.836342  120517 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:11:02.836406  120517 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:11:02.836412  120517 kubeadm.go:309] 
	I0316 00:11:02.836510  120517 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:11:02.836579  120517 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:11:02.836647  120517 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:11:02.836714  120517 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:11:02.836733  120517 kubeadm.go:309] 
	I0316 00:11:02.838236  120517 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:11:02.838321  120517 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:11:02.838396  120517 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:11:02.838467  120517 kubeadm.go:393] duration metric: took 3m57.029616306s to StartCluster
	I0316 00:11:02.838531  120517 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:11:02.838588  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:11:02.892005  120517 cri.go:89] found id: ""
	I0316 00:11:02.892038  120517 logs.go:276] 0 containers: []
	W0316 00:11:02.892047  120517 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:11:02.892055  120517 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:11:02.892128  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:11:02.937526  120517 cri.go:89] found id: ""
	I0316 00:11:02.937559  120517 logs.go:276] 0 containers: []
	W0316 00:11:02.937580  120517 logs.go:278] No container was found matching "etcd"
	I0316 00:11:02.937604  120517 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:11:02.937679  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:11:02.975229  120517 cri.go:89] found id: ""
	I0316 00:11:02.975257  120517 logs.go:276] 0 containers: []
	W0316 00:11:02.975266  120517 logs.go:278] No container was found matching "coredns"
	I0316 00:11:02.975274  120517 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:11:02.975354  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:11:03.011531  120517 cri.go:89] found id: ""
	I0316 00:11:03.011562  120517 logs.go:276] 0 containers: []
	W0316 00:11:03.011579  120517 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:11:03.011585  120517 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:11:03.011635  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:11:03.048521  120517 cri.go:89] found id: ""
	I0316 00:11:03.048556  120517 logs.go:276] 0 containers: []
	W0316 00:11:03.048568  120517 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:11:03.048575  120517 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:11:03.048642  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:11:03.086335  120517 cri.go:89] found id: ""
	I0316 00:11:03.086368  120517 logs.go:276] 0 containers: []
	W0316 00:11:03.086378  120517 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:11:03.086385  120517 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:11:03.086446  120517 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:11:03.122965  120517 cri.go:89] found id: ""
	I0316 00:11:03.123002  120517 logs.go:276] 0 containers: []
	W0316 00:11:03.123014  120517 logs.go:278] No container was found matching "kindnet"
	I0316 00:11:03.123028  120517 logs.go:123] Gathering logs for dmesg ...
	I0316 00:11:03.123053  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:11:03.136623  120517 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:11:03.136658  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:11:03.247462  120517 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:11:03.247487  120517 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:11:03.247502  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:11:03.344197  120517 logs.go:123] Gathering logs for container status ...
	I0316 00:11:03.344240  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:11:03.390367  120517 logs.go:123] Gathering logs for kubelet ...
	I0316 00:11:03.390428  120517 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0316 00:11:03.444584  120517 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:11:03.444639  120517 out.go:239] * 
	* 
	W0316 00:11:03.444707  120517 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:11:03.444737  120517 out.go:239] * 
	* 
	W0316 00:11:03.445583  120517 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:11:03.448926  120517 out.go:177] 
	W0316 00:11:03.450803  120517 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:11:03.450843  120517 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:11:03.450865  120517 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:11:03.452357  120517 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-402923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 6 (251.451974ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:03.749393  123046 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-402923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (284.67s)
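The kubeadm output above fails in the wait-control-plane phase because the kubelet never answers its health check on port 10248, and minikube's own suggestion (with the linked issue #4172) is to read the kubelet journal and retry with the systemd cgroup driver. A minimal diagnosis/retry sketch along those lines, reusing the profile name and flags from this test run; these commands are illustrative and are not part of the recorded test output:

	# inspect why the kubelet never became healthy inside the VM
	out/minikube-linux-amd64 ssh -p old-k8s-version-402923 -- sudo journalctl -xeu kubelet
	# retry the first start with the cgroup driver the log suggests
	out/minikube-linux-amd64 start -p old-k8s-version-402923 --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.20.0 --memory=2200 --extra-config=kubelet.cgroup-driver=systemd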

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (141.95s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-238598 --alsologtostderr -v=3
E0316 00:09:08.402132   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-238598 --alsologtostderr -v=3: exit status 82 (2m3.479343076s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-238598"  ...
	* Stopping node "no-preload-238598"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:09:06.231604  122437 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:09:06.231812  122437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:09:06.231825  122437 out.go:304] Setting ErrFile to fd 2...
	I0316 00:09:06.231830  122437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:09:06.232561  122437 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:09:06.233079  122437 out.go:298] Setting JSON to false
	I0316 00:09:06.233219  122437 mustload.go:65] Loading cluster: no-preload-238598
	I0316 00:09:06.234150  122437 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:09:06.234325  122437 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:09:06.234556  122437 mustload.go:65] Loading cluster: no-preload-238598
	I0316 00:09:06.235230  122437 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:09:06.235287  122437 stop.go:39] StopHost: no-preload-238598
	I0316 00:09:06.235901  122437 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:09:06.235952  122437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:09:06.251102  122437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0316 00:09:06.251693  122437 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:09:06.252383  122437 main.go:141] libmachine: Using API Version  1
	I0316 00:09:06.252418  122437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:09:06.252762  122437 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:09:06.255530  122437 out.go:177] * Stopping node "no-preload-238598"  ...
	I0316 00:09:06.257131  122437 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0316 00:09:06.257183  122437 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:09:06.257554  122437 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0316 00:09:06.257593  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:09:06.261896  122437 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:09:06.262398  122437 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:07:13 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:09:06.262445  122437 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:09:06.262609  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:09:06.262808  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:09:06.262988  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:09:06.263158  122437 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:09:06.379616  122437 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0316 00:09:06.447425  122437 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0316 00:09:06.513282  122437 main.go:141] libmachine: Stopping "no-preload-238598"...
	I0316 00:09:06.513316  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:09:06.514841  122437 main.go:141] libmachine: (no-preload-238598) Calling .Stop
	I0316 00:09:06.518402  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 0/60
	I0316 00:09:07.519741  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 1/60
	I0316 00:09:08.520958  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 2/60
	I0316 00:09:09.522549  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 3/60
	I0316 00:09:10.523938  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 4/60
	I0316 00:09:11.526122  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 5/60
	I0316 00:09:12.528310  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 6/60
	I0316 00:09:13.529790  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 7/60
	I0316 00:09:14.531490  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 8/60
	I0316 00:09:15.534189  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 9/60
	I0316 00:09:16.535699  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 10/60
	I0316 00:09:17.537910  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 11/60
	I0316 00:09:18.539314  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 12/60
	I0316 00:09:19.540728  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 13/60
	I0316 00:09:20.542214  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 14/60
	I0316 00:09:21.544241  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 15/60
	I0316 00:09:22.545735  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 16/60
	I0316 00:09:23.547031  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 17/60
	I0316 00:09:24.548709  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 18/60
	I0316 00:09:25.550249  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 19/60
	I0316 00:09:26.552823  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 20/60
	I0316 00:09:27.554139  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 21/60
	I0316 00:09:28.555698  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 22/60
	I0316 00:09:29.557747  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 23/60
	I0316 00:09:30.559046  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 24/60
	I0316 00:09:31.560786  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 25/60
	I0316 00:09:32.561997  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 26/60
	I0316 00:09:33.563906  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 27/60
	I0316 00:09:34.565179  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 28/60
	I0316 00:09:35.566507  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 29/60
	I0316 00:09:36.568635  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 30/60
	I0316 00:09:37.570010  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 31/60
	I0316 00:09:38.571563  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 32/60
	I0316 00:09:39.573764  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 33/60
	I0316 00:09:40.575195  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 34/60
	I0316 00:09:41.577320  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 35/60
	I0316 00:09:42.579112  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 36/60
	I0316 00:09:43.580602  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 37/60
	I0316 00:09:44.581853  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 38/60
	I0316 00:09:45.583103  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 39/60
	I0316 00:09:46.585199  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 40/60
	I0316 00:09:47.586678  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 41/60
	I0316 00:09:48.588004  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 42/60
	I0316 00:09:49.589298  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 43/60
	I0316 00:09:50.590649  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 44/60
	I0316 00:09:51.592543  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 45/60
	I0316 00:09:52.594075  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 46/60
	I0316 00:09:53.595581  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 47/60
	I0316 00:09:54.597666  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 48/60
	I0316 00:09:55.599025  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 49/60
	I0316 00:09:56.601194  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 50/60
	I0316 00:09:57.602279  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 51/60
	I0316 00:09:58.603485  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 52/60
	I0316 00:09:59.604543  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 53/60
	I0316 00:10:00.605673  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 54/60
	I0316 00:10:01.607391  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 55/60
	I0316 00:10:02.608330  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 56/60
	I0316 00:10:03.609415  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 57/60
	I0316 00:10:04.610509  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 58/60
	I0316 00:10:05.611836  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 59/60
	I0316 00:10:06.612816  122437 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0316 00:10:06.612862  122437 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:10:06.612883  122437 retry.go:31] will retry after 1.189295856s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:10:07.803196  122437 stop.go:39] StopHost: no-preload-238598
	I0316 00:10:07.803639  122437 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:10:07.803697  122437 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:10:07.817743  122437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34973
	I0316 00:10:07.818242  122437 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:10:07.818726  122437 main.go:141] libmachine: Using API Version  1
	I0316 00:10:07.818748  122437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:10:07.819097  122437 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:10:07.820883  122437 out.go:177] * Stopping node "no-preload-238598"  ...
	I0316 00:10:07.821997  122437 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0316 00:10:07.822017  122437 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:10:07.822230  122437 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0316 00:10:07.822253  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:10:07.825229  122437 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:10:07.825682  122437 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:07:13 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:10:07.825715  122437 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:10:07.825872  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:10:07.826078  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:10:07.826250  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:10:07.826369  122437 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	W0316 00:10:07.828611  122437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.50.137:22: connect: connection refused
	I0316 00:10:07.828654  122437 retry.go:31] will retry after 195.451742ms: dial tcp 192.168.50.137:22: connect: connection refused
	W0316 00:10:08.025517  122437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.50.137:22: connect: connection refused
	I0316 00:10:08.025563  122437 retry.go:31] will retry after 245.291128ms: dial tcp 192.168.50.137:22: connect: connection refused
	W0316 00:10:08.271425  122437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.50.137:22: connect: connection refused
	I0316 00:10:08.271467  122437 retry.go:31] will retry after 793.842731ms: dial tcp 192.168.50.137:22: connect: connection refused
	W0316 00:10:09.065869  122437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.50.137:22: connect: connection refused
	I0316 00:10:09.065934  122437 retry.go:31] will retry after 480.440609ms: dial tcp 192.168.50.137:22: connect: connection refused
	W0316 00:10:09.546947  122437 sshutil.go:64] dial failure (will retry): dial tcp 192.168.50.137:22: connect: connection refused
	W0316 00:10:09.547062  122437 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: connection refused
	I0316 00:10:09.547090  122437 main.go:141] libmachine: Stopping "no-preload-238598"...
	I0316 00:10:09.547102  122437 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:10:09.548842  122437 main.go:141] libmachine: (no-preload-238598) Calling .Stop
	I0316 00:10:09.553037  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 0/60
	I0316 00:10:10.554515  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 1/60
	I0316 00:10:11.556206  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 2/60
	I0316 00:10:12.557648  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 3/60
	I0316 00:10:13.559570  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 4/60
	I0316 00:10:14.561836  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 5/60
	I0316 00:10:15.563314  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 6/60
	I0316 00:10:16.564844  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 7/60
	I0316 00:10:17.566179  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 8/60
	I0316 00:10:18.568078  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 9/60
	I0316 00:10:19.569421  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 10/60
	I0316 00:10:20.570746  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 11/60
	I0316 00:10:21.572176  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 12/60
	I0316 00:10:22.573459  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 13/60
	I0316 00:10:23.575161  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 14/60
	I0316 00:10:24.576557  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 15/60
	I0316 00:10:25.577912  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 16/60
	I0316 00:10:26.579258  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 17/60
	I0316 00:10:27.580651  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 18/60
	I0316 00:10:28.582605  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 19/60
	I0316 00:10:29.583902  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 20/60
	I0316 00:10:30.585334  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 21/60
	I0316 00:10:31.586798  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 22/60
	I0316 00:10:32.588203  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 23/60
	I0316 00:10:33.590014  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 24/60
	I0316 00:10:34.591485  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 25/60
	I0316 00:10:35.593078  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 26/60
	I0316 00:10:36.594605  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 27/60
	I0316 00:10:37.596262  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 28/60
	I0316 00:10:38.598143  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 29/60
	I0316 00:10:39.600204  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 30/60
	I0316 00:10:40.601656  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 31/60
	I0316 00:10:41.603011  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 32/60
	I0316 00:10:42.604487  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 33/60
	I0316 00:10:43.606188  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 34/60
	I0316 00:10:44.607677  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 35/60
	I0316 00:10:45.609369  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 36/60
	I0316 00:10:46.610597  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 37/60
	I0316 00:10:47.611891  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 38/60
	I0316 00:10:48.613576  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 39/60
	I0316 00:10:49.614929  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 40/60
	I0316 00:10:50.616413  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 41/60
	I0316 00:10:51.617822  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 42/60
	I0316 00:10:52.619149  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 43/60
	I0316 00:10:53.620615  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 44/60
	I0316 00:10:54.622149  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 45/60
	I0316 00:10:55.623503  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 46/60
	I0316 00:10:56.624973  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 47/60
	I0316 00:10:57.626482  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 48/60
	I0316 00:10:58.628019  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 49/60
	I0316 00:10:59.629774  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 50/60
	I0316 00:11:00.631296  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 51/60
	I0316 00:11:01.632708  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 52/60
	I0316 00:11:02.634396  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 53/60
	I0316 00:11:03.636461  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 54/60
	I0316 00:11:04.638114  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 55/60
	I0316 00:11:05.639703  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 56/60
	I0316 00:11:06.641147  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 57/60
	I0316 00:11:07.642777  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 58/60
	I0316 00:11:08.644273  122437 main.go:141] libmachine: (no-preload-238598) Waiting for machine to stop 59/60
	I0316 00:11:09.644737  122437 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0316 00:11:09.644789  122437 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:11:09.646671  122437 out.go:177] 
	W0316 00:11:09.647955  122437 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0316 00:11:09.647974  122437 out.go:239] * 
	* 
	W0316 00:11:09.651344  122437 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:11:09.652655  122437 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-238598 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598: exit status 3 (18.468684326s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:28.123724  123187 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	E0316 00:11:28.123749  123187 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-238598" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (141.95s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (142.6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-666637 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-666637 --alsologtostderr -v=3: exit status 82 (2m3.982674652s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-666637"  ...
	* Stopping node "embed-certs-666637"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:09:12.746981  122542 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:09:12.747138  122542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:09:12.747146  122542 out.go:304] Setting ErrFile to fd 2...
	I0316 00:09:12.747152  122542 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:09:12.747601  122542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:09:12.748336  122542 out.go:298] Setting JSON to false
	I0316 00:09:12.748447  122542 mustload.go:65] Loading cluster: embed-certs-666637
	I0316 00:09:12.748802  122542 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:09:12.748867  122542 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:09:12.749030  122542 mustload.go:65] Loading cluster: embed-certs-666637
	I0316 00:09:12.749123  122542 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:09:12.749147  122542 stop.go:39] StopHost: embed-certs-666637
	I0316 00:09:12.749563  122542 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:09:12.749608  122542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:09:12.764431  122542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34907
	I0316 00:09:12.764875  122542 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:09:12.765382  122542 main.go:141] libmachine: Using API Version  1
	I0316 00:09:12.765408  122542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:09:12.765753  122542 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:09:12.768196  122542 out.go:177] * Stopping node "embed-certs-666637"  ...
	I0316 00:09:12.769459  122542 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0316 00:09:12.769490  122542 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:09:12.769724  122542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0316 00:09:12.769751  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:09:12.772326  122542 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:09:12.772652  122542 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:07:39 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:09:12.772686  122542 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:09:12.772847  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:09:12.773017  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:09:12.773172  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:09:12.773306  122542 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:09:12.901853  122542 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0316 00:09:12.957490  122542 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0316 00:09:13.018108  122542 main.go:141] libmachine: Stopping "embed-certs-666637"...
	I0316 00:09:13.018148  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:09:13.019783  122542 main.go:141] libmachine: (embed-certs-666637) Calling .Stop
	I0316 00:09:13.023218  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 0/60
	I0316 00:09:14.024736  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 1/60
	I0316 00:09:15.026369  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 2/60
	I0316 00:09:16.027885  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 3/60
	I0316 00:09:17.029202  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 4/60
	I0316 00:09:18.031346  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 5/60
	I0316 00:09:19.032749  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 6/60
	I0316 00:09:20.034086  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 7/60
	I0316 00:09:21.035461  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 8/60
	I0316 00:09:22.036817  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 9/60
	I0316 00:09:23.038290  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 10/60
	I0316 00:09:24.040143  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 11/60
	I0316 00:09:25.042467  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 12/60
	I0316 00:09:26.044026  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 13/60
	I0316 00:09:27.045982  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 14/60
	I0316 00:09:28.047912  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 15/60
	I0316 00:09:29.049601  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 16/60
	I0316 00:09:30.051138  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 17/60
	I0316 00:09:31.052564  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 18/60
	I0316 00:09:32.054936  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 19/60
	I0316 00:09:33.056998  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 20/60
	I0316 00:09:34.058548  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 21/60
	I0316 00:09:35.059962  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 22/60
	I0316 00:09:36.061305  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 23/60
	I0316 00:09:37.062461  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 24/60
	I0316 00:09:38.064416  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 25/60
	I0316 00:09:39.065840  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 26/60
	I0316 00:09:40.067292  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 27/60
	I0316 00:09:41.068589  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 28/60
	I0316 00:09:42.070221  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 29/60
	I0316 00:09:43.072449  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 30/60
	I0316 00:09:44.074011  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 31/60
	I0316 00:09:45.075405  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 32/60
	I0316 00:09:46.076806  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 33/60
	I0316 00:09:47.078287  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 34/60
	I0316 00:09:48.080416  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 35/60
	I0316 00:09:49.081845  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 36/60
	I0316 00:09:50.083388  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 37/60
	I0316 00:09:51.084741  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 38/60
	I0316 00:09:52.086101  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 39/60
	I0316 00:09:53.088222  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 40/60
	I0316 00:09:54.089720  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 41/60
	I0316 00:09:55.091258  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 42/60
	I0316 00:09:56.092741  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 43/60
	I0316 00:09:57.094344  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 44/60
	I0316 00:09:58.096282  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 45/60
	I0316 00:09:59.097853  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 46/60
	I0316 00:10:00.100398  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 47/60
	I0316 00:10:01.101982  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 48/60
	I0316 00:10:02.103409  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 49/60
	I0316 00:10:03.105825  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 50/60
	I0316 00:10:04.107400  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 51/60
	I0316 00:10:05.108821  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 52/60
	I0316 00:10:06.110363  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 53/60
	I0316 00:10:07.112117  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 54/60
	I0316 00:10:08.114494  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 55/60
	I0316 00:10:09.116113  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 56/60
	I0316 00:10:10.117649  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 57/60
	I0316 00:10:11.119201  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 58/60
	I0316 00:10:12.120660  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 59/60
	I0316 00:10:13.122167  122542 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0316 00:10:13.122231  122542 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:10:13.122250  122542 retry.go:31] will retry after 1.478092154s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:10:14.601853  122542 stop.go:39] StopHost: embed-certs-666637
	I0316 00:10:14.602235  122542 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:10:14.602295  122542 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:10:14.616557  122542 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39321
	I0316 00:10:14.617034  122542 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:10:14.617623  122542 main.go:141] libmachine: Using API Version  1
	I0316 00:10:14.617656  122542 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:10:14.618004  122542 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:10:14.620162  122542 out.go:177] * Stopping node "embed-certs-666637"  ...
	I0316 00:10:14.621625  122542 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0316 00:10:14.621652  122542 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:10:14.621922  122542 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0316 00:10:14.621958  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:10:14.624832  122542 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:10:14.625368  122542 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:07:39 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:10:14.625400  122542 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:10:14.625538  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:10:14.625720  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:10:14.625889  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:10:14.626207  122542 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	W0316 00:10:14.626910  122542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.61.91:22: connect: connection refused
	I0316 00:10:14.626952  122542 retry.go:31] will retry after 291.208267ms: dial tcp 192.168.61.91:22: connect: connection refused
	W0316 00:10:14.918835  122542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.61.91:22: connect: connection refused
	I0316 00:10:14.918890  122542 retry.go:31] will retry after 304.622209ms: dial tcp 192.168.61.91:22: connect: connection refused
	W0316 00:10:15.224888  122542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.61.91:22: connect: connection refused
	I0316 00:10:15.224932  122542 retry.go:31] will retry after 352.760014ms: dial tcp 192.168.61.91:22: connect: connection refused
	W0316 00:10:15.578891  122542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.61.91:22: connect: connection refused
	I0316 00:10:15.578944  122542 retry.go:31] will retry after 998.783522ms: dial tcp 192.168.61.91:22: connect: connection refused
	W0316 00:10:16.578446  122542 sshutil.go:64] dial failure (will retry): dial tcp 192.168.61.91:22: connect: connection refused
	W0316 00:10:16.578565  122542 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: connection refused
	I0316 00:10:16.578598  122542 main.go:141] libmachine: Stopping "embed-certs-666637"...
	I0316 00:10:16.578615  122542 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:10:16.580110  122542 main.go:141] libmachine: (embed-certs-666637) Calling .Stop
	I0316 00:10:16.583556  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 0/60
	I0316 00:10:17.585614  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 1/60
	I0316 00:10:18.586659  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 2/60
	I0316 00:10:19.587957  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 3/60
	I0316 00:10:20.589686  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 4/60
	I0316 00:10:21.590984  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 5/60
	I0316 00:10:22.592112  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 6/60
	I0316 00:10:23.593216  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 7/60
	I0316 00:10:24.594607  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 8/60
	I0316 00:10:25.596412  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 9/60
	I0316 00:10:26.597694  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 10/60
	I0316 00:10:27.598899  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 11/60
	I0316 00:10:28.600550  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 12/60
	I0316 00:10:29.601774  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 13/60
	I0316 00:10:30.602997  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 14/60
	I0316 00:10:31.604225  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 15/60
	I0316 00:10:32.605527  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 16/60
	I0316 00:10:33.606693  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 17/60
	I0316 00:10:34.607801  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 18/60
	I0316 00:10:35.609434  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 19/60
	I0316 00:10:36.610604  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 20/60
	I0316 00:10:37.611969  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 21/60
	I0316 00:10:38.613294  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 22/60
	I0316 00:10:39.614654  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 23/60
	I0316 00:10:40.616291  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 24/60
	I0316 00:10:41.617666  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 25/60
	I0316 00:10:42.618966  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 26/60
	I0316 00:10:43.620439  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 27/60
	I0316 00:10:44.621504  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 28/60
	I0316 00:10:45.622851  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 29/60
	I0316 00:10:46.624229  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 30/60
	I0316 00:10:47.625483  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 31/60
	I0316 00:10:48.626767  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 32/60
	I0316 00:10:49.628015  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 33/60
	I0316 00:10:50.629704  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 34/60
	I0316 00:10:51.630933  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 35/60
	I0316 00:10:52.632145  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 36/60
	I0316 00:10:53.633560  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 37/60
	I0316 00:10:54.635295  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 38/60
	I0316 00:10:55.636507  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 39/60
	I0316 00:10:56.637744  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 40/60
	I0316 00:10:57.639075  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 41/60
	I0316 00:10:58.640254  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 42/60
	I0316 00:10:59.641957  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 43/60
	I0316 00:11:00.643371  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 44/60
	I0316 00:11:01.644587  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 45/60
	I0316 00:11:02.645863  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 46/60
	I0316 00:11:03.648170  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 47/60
	I0316 00:11:04.649725  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 48/60
	I0316 00:11:05.651162  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 49/60
	I0316 00:11:06.652366  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 50/60
	I0316 00:11:07.653455  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 51/60
	I0316 00:11:08.654514  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 52/60
	I0316 00:11:09.656164  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 53/60
	I0316 00:11:10.657547  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 54/60
	I0316 00:11:11.658961  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 55/60
	I0316 00:11:12.660371  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 56/60
	I0316 00:11:13.661700  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 57/60
	I0316 00:11:14.663410  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 58/60
	I0316 00:11:15.664738  122542 main.go:141] libmachine: (embed-certs-666637) Waiting for machine to stop 59/60
	I0316 00:11:16.665354  122542 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0316 00:11:16.665412  122542 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:11:16.667460  122542 out.go:177] 
	W0316 00:11:16.669004  122542 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0316 00:11:16.669022  122542 out.go:239] * 
	* 
	W0316 00:11:16.672348  122542 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:11:16.673736  122542 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-666637 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637: exit status 3 (18.615670278s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:35.291689  123241 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	E0316 00:11:35.291709  123241 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-666637" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (142.60s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (141.73s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-313436 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-313436 --alsologtostderr -v=3: exit status 82 (2m3.069377418s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-313436"  ...
	* Stopping node "default-k8s-diff-port-313436"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:09:56.367042  122806 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:09:56.367174  122806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:09:56.367183  122806 out.go:304] Setting ErrFile to fd 2...
	I0316 00:09:56.367186  122806 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:09:56.367398  122806 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:09:56.367613  122806 out.go:298] Setting JSON to false
	I0316 00:09:56.367691  122806 mustload.go:65] Loading cluster: default-k8s-diff-port-313436
	I0316 00:09:56.368018  122806 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:09:56.368082  122806 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:09:56.368242  122806 mustload.go:65] Loading cluster: default-k8s-diff-port-313436
	I0316 00:09:56.368334  122806 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:09:56.368365  122806 stop.go:39] StopHost: default-k8s-diff-port-313436
	I0316 00:09:56.368689  122806 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:09:56.368727  122806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:09:56.382779  122806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38311
	I0316 00:09:56.383274  122806 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:09:56.383939  122806 main.go:141] libmachine: Using API Version  1
	I0316 00:09:56.383967  122806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:09:56.384383  122806 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:09:56.386572  122806 out.go:177] * Stopping node "default-k8s-diff-port-313436"  ...
	I0316 00:09:56.388357  122806 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0316 00:09:56.388382  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:09:56.388621  122806 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0316 00:09:56.388648  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:09:56.391363  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:09:56.391771  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:09:56.391797  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:09:56.391931  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:09:56.392104  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:09:56.392241  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:09:56.392392  122806 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:09:56.481949  122806 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0316 00:09:56.525148  122806 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0316 00:09:56.583992  122806 main.go:141] libmachine: Stopping "default-k8s-diff-port-313436"...
	I0316 00:09:56.584042  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:09:56.585726  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Stop
	I0316 00:09:56.589705  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 0/60
	I0316 00:09:57.590910  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 1/60
	I0316 00:09:58.592289  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 2/60
	I0316 00:09:59.593829  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 3/60
	I0316 00:10:00.595148  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 4/60
	I0316 00:10:01.597321  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 5/60
	I0316 00:10:02.598746  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 6/60
	I0316 00:10:03.600338  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 7/60
	I0316 00:10:04.601811  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 8/60
	I0316 00:10:05.603456  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 9/60
	I0316 00:10:06.605751  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 10/60
	I0316 00:10:07.607191  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 11/60
	I0316 00:10:08.608931  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 12/60
	I0316 00:10:09.610426  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 13/60
	I0316 00:10:10.611918  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 14/60
	I0316 00:10:11.613811  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 15/60
	I0316 00:10:12.615260  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 16/60
	I0316 00:10:13.616648  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 17/60
	I0316 00:10:14.618030  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 18/60
	I0316 00:10:15.619514  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 19/60
	I0316 00:10:16.621751  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 20/60
	I0316 00:10:17.623744  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 21/60
	I0316 00:10:18.625070  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 22/60
	I0316 00:10:19.626236  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 23/60
	I0316 00:10:20.627458  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 24/60
	I0316 00:10:21.629346  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 25/60
	I0316 00:10:22.630464  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 26/60
	I0316 00:10:23.631733  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 27/60
	I0316 00:10:24.633816  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 28/60
	I0316 00:10:25.635070  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 29/60
	I0316 00:10:26.637298  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 30/60
	I0316 00:10:27.638567  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 31/60
	I0316 00:10:28.640067  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 32/60
	I0316 00:10:29.641338  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 33/60
	I0316 00:10:30.642793  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 34/60
	I0316 00:10:31.644831  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 35/60
	I0316 00:10:32.646389  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 36/60
	I0316 00:10:33.647871  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 37/60
	I0316 00:10:34.649481  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 38/60
	I0316 00:10:35.651130  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 39/60
	I0316 00:10:36.653153  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 40/60
	I0316 00:10:37.655435  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 41/60
	I0316 00:10:38.656975  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 42/60
	I0316 00:10:39.658333  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 43/60
	I0316 00:10:40.659682  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 44/60
	I0316 00:10:41.661659  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 45/60
	I0316 00:10:42.662954  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 46/60
	I0316 00:10:43.664583  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 47/60
	I0316 00:10:44.665912  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 48/60
	I0316 00:10:45.667364  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 49/60
	I0316 00:10:46.669457  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 50/60
	I0316 00:10:47.670881  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 51/60
	I0316 00:10:48.672180  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 52/60
	I0316 00:10:49.673581  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 53/60
	I0316 00:10:50.674864  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 54/60
	I0316 00:10:51.677125  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 55/60
	I0316 00:10:52.678494  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 56/60
	I0316 00:10:53.679819  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 57/60
	I0316 00:10:54.681303  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 58/60
	I0316 00:10:55.682532  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 59/60
	I0316 00:10:56.683966  122806 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0316 00:10:56.684022  122806 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:10:56.684040  122806 retry.go:31] will retry after 773.616447ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:10:57.457938  122806 stop.go:39] StopHost: default-k8s-diff-port-313436
	I0316 00:10:57.458359  122806 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:10:57.458415  122806 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:10:57.473087  122806 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32801
	I0316 00:10:57.473476  122806 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:10:57.473981  122806 main.go:141] libmachine: Using API Version  1
	I0316 00:10:57.474005  122806 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:10:57.474338  122806 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:10:57.476362  122806 out.go:177] * Stopping node "default-k8s-diff-port-313436"  ...
	I0316 00:10:57.477664  122806 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0316 00:10:57.477692  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:10:57.477934  122806 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0316 00:10:57.477957  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:10:57.481147  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:10:57.481528  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:10:57.481572  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:10:57.481727  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:10:57.481935  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:10:57.482092  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:10:57.482270  122806 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	W0316 00:10:57.482969  122806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.72.198:22: connect: connection refused
	I0316 00:10:57.483016  122806 retry.go:31] will retry after 170.480101ms: dial tcp 192.168.72.198:22: connect: connection refused
	W0316 00:10:57.654842  122806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.72.198:22: connect: connection refused
	I0316 00:10:57.654886  122806 retry.go:31] will retry after 331.967872ms: dial tcp 192.168.72.198:22: connect: connection refused
	W0316 00:10:57.987863  122806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.72.198:22: connect: connection refused
	I0316 00:10:57.987910  122806 retry.go:31] will retry after 700.31608ms: dial tcp 192.168.72.198:22: connect: connection refused
	W0316 00:10:58.689169  122806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.72.198:22: connect: connection refused
	I0316 00:10:58.689223  122806 retry.go:31] will retry after 580.438573ms: dial tcp 192.168.72.198:22: connect: connection refused
	W0316 00:10:59.270394  122806 sshutil.go:64] dial failure (will retry): dial tcp 192.168.72.198:22: connect: connection refused
	W0316 00:10:59.270491  122806 stop.go:55] failed to complete vm config backup (will continue): create dir: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: connection refused
	I0316 00:10:59.270515  122806 main.go:141] libmachine: Stopping "default-k8s-diff-port-313436"...
	I0316 00:10:59.270525  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:10:59.272218  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Stop
	I0316 00:10:59.275482  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 0/60
	I0316 00:11:00.277195  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 1/60
	I0316 00:11:01.278614  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 2/60
	I0316 00:11:02.279993  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 3/60
	I0316 00:11:03.281669  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 4/60
	I0316 00:11:04.283185  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 5/60
	I0316 00:11:05.284542  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 6/60
	I0316 00:11:06.285874  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 7/60
	I0316 00:11:07.287494  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 8/60
	I0316 00:11:08.289417  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 9/60
	I0316 00:11:09.290762  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 10/60
	I0316 00:11:10.292201  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 11/60
	I0316 00:11:11.293589  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 12/60
	I0316 00:11:12.295052  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 13/60
	I0316 00:11:13.296833  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 14/60
	I0316 00:11:14.298173  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 15/60
	I0316 00:11:15.299674  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 16/60
	I0316 00:11:16.302137  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 17/60
	I0316 00:11:17.303709  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 18/60
	I0316 00:11:18.305568  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 19/60
	I0316 00:11:19.306997  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 20/60
	I0316 00:11:20.308555  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 21/60
	I0316 00:11:21.309904  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 22/60
	I0316 00:11:22.311468  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 23/60
	I0316 00:11:23.313393  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 24/60
	I0316 00:11:24.315175  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 25/60
	I0316 00:11:25.316855  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 26/60
	I0316 00:11:26.318585  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 27/60
	I0316 00:11:27.320311  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 28/60
	I0316 00:11:28.321999  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 29/60
	I0316 00:11:29.323598  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 30/60
	I0316 00:11:30.324929  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 31/60
	I0316 00:11:31.326273  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 32/60
	I0316 00:11:32.327986  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 33/60
	I0316 00:11:33.329518  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 34/60
	I0316 00:11:34.331219  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 35/60
	I0316 00:11:35.332569  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 36/60
	I0316 00:11:36.334288  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 37/60
	I0316 00:11:37.335883  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 38/60
	I0316 00:11:38.337539  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 39/60
	I0316 00:11:39.339280  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 40/60
	I0316 00:11:40.340832  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 41/60
	I0316 00:11:41.342279  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 42/60
	I0316 00:11:42.344024  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 43/60
	I0316 00:11:43.346070  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 44/60
	I0316 00:11:44.347540  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 45/60
	I0316 00:11:45.349314  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 46/60
	I0316 00:11:46.350689  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 47/60
	I0316 00:11:47.352290  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 48/60
	I0316 00:11:48.354104  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 49/60
	I0316 00:11:49.355466  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 50/60
	I0316 00:11:50.357175  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 51/60
	I0316 00:11:51.358613  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 52/60
	I0316 00:11:52.360198  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 53/60
	I0316 00:11:53.362091  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 54/60
	I0316 00:11:54.363744  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 55/60
	I0316 00:11:55.365918  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 56/60
	I0316 00:11:56.367569  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 57/60
	I0316 00:11:57.368651  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 58/60
	I0316 00:11:58.370563  122806 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for machine to stop 59/60
	I0316 00:11:59.371109  122806 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0316 00:11:59.371169  122806 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0316 00:11:59.373383  122806 out.go:177] 
	W0316 00:11:59.374783  122806 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0316 00:11:59.374808  122806 out.go:239] * 
	W0316 00:11:59.377987  122806 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:11:59.379301  122806 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-313436 --alsologtostderr -v=3" : exit status 82
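The stop path above is a fixed, bounded wait: libmachine issues .Stop, polls the domain state once per second for 60 attempts ("Waiting for machine to stop N/60"), retries the whole stop once after a sub-second backoff, and then exits with GUEST_STOP_TIMEOUT because the VM never leaves "Running". A minimal Go sketch of that pattern, where stop and state are illustrative stand-ins for the driver's .Stop and .GetState calls, not minikube's actual API:

package main

import (
	"fmt"
	"time"
)

// waitForStop is an illustrative stand-in for the libmachine stop path in the
// log: issue the stop, then poll the domain state once per second for 60 tries.
func waitForStop(stop func() error, state func() string) error {
	if err := stop(); err != nil {
		return err
	}
	for i := 0; i < 60; i++ { // "Waiting for machine to stop i/60"
		if state() == "Stopped" {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", state())
}

func main() {
	stop := func() error { return nil }         // hypothetical driver .Stop
	state := func() string { return "Running" } // simulates the stuck VM in this run

	err := waitForStop(stop, state)
	if err != nil {
		time.Sleep(773 * time.Millisecond) // single retry after a short backoff, as in retry.go:31
		err = waitForStop(stop, state)
	}
	if err != nil {
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
	}
}

In this run both attempts burn the full 60 polls, which is why the stop command runs for roughly two minutes before returning exit status 82.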
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436: exit status 3 (18.662180052s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:12:18.043713  123593 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host
	E0316 00:12:18.043743  123593 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-313436" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (141.73s)
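For the post-mortem, helpers_test queries the host state with `minikube status --format={{.Host}}`; the --format argument is a Go text/template rendered against minikube's status, and with SSH to 192.168.72.198 unreachable the Host field comes back as "Error", hence exit status 3. A small self-contained sketch of that kind of rendering (the Status struct below is a hypothetical stand-in, not minikube's actual type):

package main

import (
	"os"
	"text/template"
)

// Status is a hypothetical stand-in for the struct minikube renders with
// --format; only Host matters for the {{.Host}} query used by helpers_test.
type Status struct {
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
}

func main() {
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	// With SSH to the node unreachable, the reported host state is "Error",
	// which is why the status command exits 3 and prints "Error" above.
	_ = tmpl.Execute(os.Stdout, Status{Host: "Error"})
}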

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-402923 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-402923 create -f testdata/busybox.yaml: exit status 1 (44.63087ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-402923" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-402923 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 6 (236.476321ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:04.032435  123085 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-402923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 6 (235.743168ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:04.267209  123115 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-402923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.52s)
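Both the kubectl create and the status checks fail for the same underlying reason: the profile's context was never written back to the kubeconfig ("old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig), so any `kubectl --context old-k8s-version-402923` invocation exits 1. A hedged sketch of the missing precondition, assuming k8s.io/client-go is available; hasContext is an illustrative helper, not part of the test suite:

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// hasContext reports whether a named context exists in the given kubeconfig.
// Illustrative only: this is the precondition the DeployApp step is missing.
func hasContext(kubeconfigPath, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return false, err
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	ok, err := hasContext(os.Getenv("KUBECONFIG"), "old-k8s-version-402923")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// false in this run, hence kubectl's exit status 1 above
	fmt.Println("context present:", ok)
}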

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-402923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-402923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m56.035186846s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-402923 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-402923 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-402923 describe deploy/metrics-server -n kube-system: exit status 1 (45.617843ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-402923" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-402923 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 6 (232.504977ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:13:00.580842  123957 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-402923" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (116.31s)
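The addon enable itself runs `kubectl apply` on the node against localhost:8443, and every apply is refused because nothing is listening on the apiserver port while the control plane is still coming up. A tiny illustrative pre-check (not part of minikube) that shows the condition the addon callback keeps hitting:

package main

import (
	"fmt"
	"net"
	"time"
)

// reachable does a plain TCP dial with a timeout. It is not part of minikube;
// it just demonstrates why the kubectl apply in the callback is refused.
func reachable(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	// localhost:8443 is the endpoint the callback's kubectl apply uses in this log.
	fmt.Println("apiserver reachable:", reachable("localhost:8443", 2*time.Second))
}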

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598: exit status 3 (3.1673919s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:31.291751  123295 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	E0316 00:11:31.291778  123295 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-238598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-238598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153347279s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-238598 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598: exit status 3 (3.062299789s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:40.507704  123395 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host
	E0316 00:11:40.507734  123395 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.137:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-238598" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637: exit status 3 (3.167799154s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:38.459745  123365 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	E0316 00:11:38.459773  123365 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-666637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-666637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153599925s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-666637 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637: exit status 3 (3.062014219s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:11:47.675719  123496 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host
	E0316 00:11:47.675756  123496 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.91:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-666637" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436: exit status 3 (3.1677177s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:12:21.211690  123706 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host
	E0316 00:12:21.211706  123706 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-313436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-313436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.15329586s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-313436 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436: exit status 3 (3.062012828s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0316 00:12:30.427663  123777 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host
	E0316 00:12:30.427681  123777 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.198:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-313436" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (744.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-402923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E0316 00:13:58.905690   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0316 00:14:08.402858   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0316 00:15:21.952230   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0316 00:18:58.905837   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0316 00:19:08.402133   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0316 00:20:31.451928   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-402923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m21.038218552s)

                                                
                                                
-- stdout --
	* [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
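The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the v1.20-era pause image and the cgroupfs driver with conmon in the pod cgroup. A quick way to confirm the result on the guest (a sketch, not captured in this run; it assumes the stock minikube drop-in layout):

	$ sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.2"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"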
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
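The sysctl probe at 00:17:14.280 fails only because the br_netfilter module is not loaded yet, so /proc/sys/net/bridge/bridge-nf-call-iptables does not exist; loading the module and enabling IPv4 forwarding, as the next two commands do, resolves it. The equivalent manual sequence (a sketch, not taken from this run):

	$ sudo modprobe br_netfilter
	$ sudo sysctl net.bridge.bridge-nf-call-iptables    # the key exists once the module is loaded
	$ sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'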
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
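The one-liner above is an atomic edit of /etc/hosts; the same command, restated for readability:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts;     # drop any stale host.minikube.internal entry
	  echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts                           # copy back as root; plain redirection would not be privileged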
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
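The warning is expected here: the preload tarball did not contain the v1.20.0 images for this runtime (see the "assuming images are not preloaded" lines above), and the per-image cache files under .minikube/cache/images/amd64 do not exist on the host, so LoadCachedImages gives up and the images are expected to be pulled over the network during cluster bring-up instead. A hypothetical check on the host (not part of this run):

	$ ls /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null \
	    || echo "no per-image cache present"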
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
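The unit fragment above becomes a systemd drop-in; a few lines below it is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal way to verify what kubelet will actually run with (hypothetical, not captured here):

	$ systemctl cat kubelet      # shows the base unit plus the 10-kubeadm.conf drop-in
	$ grep hostname-override /etc/systemd/system/kubelet.service.d/10-kubeadm.conf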
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
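The kubeadm output above already names the troubleshooting steps; as a sketch, they can be run inside the node, for example via `minikube ssh` (the profile name is not shown in this excerpt, so `<profile>` is a placeholder):

    minikube ssh -p <profile>
    # is the kubelet up, and why did it stop?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet
    # list any control-plane containers CRI-O managed to start
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then inspect the failing one
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
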
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
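The log-gathering pass above can be reproduced by hand on the node; the commands below mirror the Run: lines verbatim (the kubectl path and kubeconfig location are the ones minikube uses for this v1.20.0 cluster):

    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    sudo journalctl -u crio -n 400
    sudo crictl ps -a
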
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
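As the advice box suggests, a full diagnostic bundle for an issue report can be captured with the command below (adding `-p <profile>` if the failing cluster is not the active profile; the profile name is not shown in this excerpt):

    minikube logs --file=logs.txt
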
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-402923 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
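Note: the kubeadm output above already identifies the kubelet as the component that never came up and itself suggests 'systemctl status kubelet', 'journalctl -xeu kubelet' and a crictl container listing. A minimal sketch of following those suggestions against this profile (old-k8s-version-402923 is the profile name from this run; running the checks over 'minikube ssh' is an assumption for illustration, not something the test does):

	out/minikube-linux-amd64 -p old-k8s-version-402923 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-402923 ssh "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
	out/minikube-linux-amd64 -p old-k8s-version-402923 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
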
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (251.101951ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-402923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-402923 logs -n 25: (1.589730059s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-313368 ssh                                | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:13:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:00.891560  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:13:06.971548  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:10.043616  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:16.123615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:19.195641  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:25.275569  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:28.347627  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:34.427628  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:37.499621  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:43.579636  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:46.651611  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:52.731602  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:55.803555  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:01.883545  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:04.955579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:11.035610  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:14.107615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:20.187606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:23.259572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:29.339575  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:32.411617  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:38.491587  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:41.563659  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:47.643582  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:50.715565  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:56.795596  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:59.867614  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:05.947572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:09.019585  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:15.099606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:18.171563  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:24.251589  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:27.323592  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:33.403599  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:36.475652  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:42.555600  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:45.627577  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:51.707630  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:54.779625  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:00.859579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:03.931626  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:10.011762  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:13.083615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:16.087122  123537 start.go:364] duration metric: took 4m28.254030119s to acquireMachinesLock for "embed-certs-666637"
	I0316 00:16:16.087211  123537 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:16.087224  123537 fix.go:54] fixHost starting: 
	I0316 00:16:16.087613  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:16.087653  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:16.102371  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0316 00:16:16.102813  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:16.103305  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:16.103343  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:16.103693  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:16.103874  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:16.104010  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:16.105752  123537 fix.go:112] recreateIfNeeded on embed-certs-666637: state=Stopped err=<nil>
	I0316 00:16:16.105780  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	W0316 00:16:16.105959  123537 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:16.107881  123537 out.go:177] * Restarting existing kvm2 VM for "embed-certs-666637" ...
	I0316 00:16:16.109056  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Start
	I0316 00:16:16.109231  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring networks are active...
	I0316 00:16:16.110036  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network default is active
	I0316 00:16:16.110372  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network mk-embed-certs-666637 is active
	I0316 00:16:16.110782  123537 main.go:141] libmachine: (embed-certs-666637) Getting domain xml...
	I0316 00:16:16.111608  123537 main.go:141] libmachine: (embed-certs-666637) Creating domain...
	I0316 00:16:17.296901  123537 main.go:141] libmachine: (embed-certs-666637) Waiting to get IP...
	I0316 00:16:17.297746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.298129  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.298317  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.298111  124543 retry.go:31] will retry after 269.98852ms: waiting for machine to come up
	I0316 00:16:17.569866  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.570322  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.570349  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.570278  124543 retry.go:31] will retry after 244.711835ms: waiting for machine to come up
	I0316 00:16:16.084301  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:16.084359  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084699  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:16:16.084726  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084970  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:16:16.086868  123454 machine.go:97] duration metric: took 4m35.39093995s to provisionDockerMachine
	I0316 00:16:16.087007  123454 fix.go:56] duration metric: took 4m35.413006758s for fixHost
	I0316 00:16:16.087038  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 4m35.413320023s
	W0316 00:16:16.087068  123454 start.go:713] error starting host: provision: host is not running
	W0316 00:16:16.087236  123454 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0316 00:16:16.087249  123454 start.go:728] Will try again in 5 seconds ...
	I0316 00:16:17.816747  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.817165  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.817196  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.817109  124543 retry.go:31] will retry after 326.155242ms: waiting for machine to come up
	I0316 00:16:18.144611  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.145047  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.145081  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.145000  124543 retry.go:31] will retry after 464.805158ms: waiting for machine to come up
	I0316 00:16:18.611746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.612105  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.612140  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.612039  124543 retry.go:31] will retry after 593.718495ms: waiting for machine to come up
	I0316 00:16:19.208024  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.208444  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.208476  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.208379  124543 retry.go:31] will retry after 772.07702ms: waiting for machine to come up
	I0316 00:16:19.982326  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.982800  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.982827  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.982706  124543 retry.go:31] will retry after 846.887476ms: waiting for machine to come up
	I0316 00:16:20.830726  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:20.831144  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:20.831168  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:20.831098  124543 retry.go:31] will retry after 1.274824907s: waiting for machine to come up
	I0316 00:16:22.107855  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:22.108252  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:22.108278  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:22.108209  124543 retry.go:31] will retry after 1.41217789s: waiting for machine to come up
	I0316 00:16:21.088013  123454 start.go:360] acquireMachinesLock for no-preload-238598: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:23.522725  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:23.523143  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:23.523179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:23.523094  124543 retry.go:31] will retry after 1.567285216s: waiting for machine to come up
	I0316 00:16:25.092539  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:25.092954  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:25.092981  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:25.092941  124543 retry.go:31] will retry after 2.260428679s: waiting for machine to come up
	I0316 00:16:27.354650  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:27.355051  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:27.355082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:27.354990  124543 retry.go:31] will retry after 2.402464465s: waiting for machine to come up
	I0316 00:16:29.758774  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:29.759220  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:29.759253  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:29.759176  124543 retry.go:31] will retry after 3.63505234s: waiting for machine to come up
	I0316 00:16:34.648552  123819 start.go:364] duration metric: took 4m4.062008179s to acquireMachinesLock for "default-k8s-diff-port-313436"
	I0316 00:16:34.648628  123819 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:34.648638  123819 fix.go:54] fixHost starting: 
	I0316 00:16:34.649089  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:34.649134  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:34.667801  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0316 00:16:34.668234  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:34.668737  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:16:34.668768  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:34.669123  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:34.669349  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:34.669552  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:16:34.671100  123819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313436: state=Stopped err=<nil>
	I0316 00:16:34.671139  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	W0316 00:16:34.671297  123819 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:34.673738  123819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-313436" ...
	I0316 00:16:34.675120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Start
	I0316 00:16:34.675292  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring networks are active...
	I0316 00:16:34.676038  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network default is active
	I0316 00:16:34.676427  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network mk-default-k8s-diff-port-313436 is active
	I0316 00:16:34.676855  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Getting domain xml...
	I0316 00:16:34.677501  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Creating domain...
	I0316 00:16:33.397686  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398274  123537 main.go:141] libmachine: (embed-certs-666637) Found IP for machine: 192.168.61.91
	I0316 00:16:33.398301  123537 main.go:141] libmachine: (embed-certs-666637) Reserving static IP address...
	I0316 00:16:33.398319  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has current primary IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398829  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.398859  123537 main.go:141] libmachine: (embed-certs-666637) DBG | skip adding static IP to network mk-embed-certs-666637 - found existing host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"}
	I0316 00:16:33.398883  123537 main.go:141] libmachine: (embed-certs-666637) Reserved static IP address: 192.168.61.91
	I0316 00:16:33.398896  123537 main.go:141] libmachine: (embed-certs-666637) Waiting for SSH to be available...
	I0316 00:16:33.398905  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Getting to WaitForSSH function...
	I0316 00:16:33.401376  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.401835  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.401872  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.402054  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH client type: external
	I0316 00:16:33.402082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa (-rw-------)
	I0316 00:16:33.402113  123537 main.go:141] libmachine: (embed-certs-666637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:33.402141  123537 main.go:141] libmachine: (embed-certs-666637) DBG | About to run SSH command:
	I0316 00:16:33.402188  123537 main.go:141] libmachine: (embed-certs-666637) DBG | exit 0
	I0316 00:16:33.523353  123537 main.go:141] libmachine: (embed-certs-666637) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:33.523747  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetConfigRaw
	I0316 00:16:33.524393  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.526639  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527046  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.527080  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527278  123537 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:16:33.527509  123537 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:33.527527  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:33.527766  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.529906  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.530210  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530341  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.530596  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530816  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530953  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.531119  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.531334  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.531348  123537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:33.635573  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:33.635601  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.635879  123537 buildroot.go:166] provisioning hostname "embed-certs-666637"
	I0316 00:16:33.635905  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.636109  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.638998  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639369  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.639417  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639629  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.639795  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.639971  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.640103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.640366  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.640524  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.640543  123537 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-666637 && echo "embed-certs-666637" | sudo tee /etc/hostname
	I0316 00:16:33.757019  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-666637
	
	I0316 00:16:33.757049  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.759808  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760120  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.760154  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760375  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.760583  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760723  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760829  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.760951  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.761121  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.761144  123537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-666637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-666637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-666637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:33.873548  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:33.873587  123537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:33.873642  123537 buildroot.go:174] setting up certificates
	I0316 00:16:33.873654  123537 provision.go:84] configureAuth start
	I0316 00:16:33.873666  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.873986  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.876609  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.876976  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.877004  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.877194  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.879624  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880156  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.880185  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880300  123537 provision.go:143] copyHostCerts
	I0316 00:16:33.880359  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:33.880370  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:33.880441  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:33.880526  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:33.880534  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:33.880558  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:33.880625  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:33.880632  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:33.880653  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:33.880707  123537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.embed-certs-666637 san=[127.0.0.1 192.168.61.91 embed-certs-666637 localhost minikube]
	I0316 00:16:33.984403  123537 provision.go:177] copyRemoteCerts
	I0316 00:16:33.984471  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:33.984499  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.987297  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987711  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.987741  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987894  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.988108  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.988284  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.988456  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.069540  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:34.094494  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 00:16:34.119198  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:34.144669  123537 provision.go:87] duration metric: took 271.000471ms to configureAuth
	I0316 00:16:34.144701  123537 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:34.144891  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:34.144989  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.148055  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148464  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.148496  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148710  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.148918  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149097  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149251  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.149416  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.149580  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.149596  123537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:34.414026  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:34.414058  123537 machine.go:97] duration metric: took 886.536134ms to provisionDockerMachine
	I0316 00:16:34.414070  123537 start.go:293] postStartSetup for "embed-certs-666637" (driver="kvm2")
	I0316 00:16:34.414081  123537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:34.414101  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.414464  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:34.414497  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.417211  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417482  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.417520  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417617  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.417804  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.417990  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.418126  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.498223  123537 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:34.502954  123537 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:34.502989  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:34.503068  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:34.503156  123537 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:34.503258  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:34.513065  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:34.537606  123537 start.go:296] duration metric: took 123.521431ms for postStartSetup
	I0316 00:16:34.537657  123537 fix.go:56] duration metric: took 18.450434099s for fixHost
	I0316 00:16:34.537679  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.540574  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.540908  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.540950  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.541086  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.541302  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541471  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541609  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.541803  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.542009  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.542025  123537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:34.648381  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548194.613058580
	
	I0316 00:16:34.648419  123537 fix.go:216] guest clock: 1710548194.613058580
	I0316 00:16:34.648427  123537 fix.go:229] Guest: 2024-03-16 00:16:34.61305858 +0000 UTC Remote: 2024-03-16 00:16:34.537661993 +0000 UTC m=+286.854063579 (delta=75.396587ms)
	I0316 00:16:34.648454  123537 fix.go:200] guest clock delta is within tolerance: 75.396587ms
	I0316 00:16:34.648459  123537 start.go:83] releasing machines lock for "embed-certs-666637", held for 18.561300744s
	I0316 00:16:34.648483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.648770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:34.651350  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651748  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.651794  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651926  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652573  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652810  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652907  123537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:34.652965  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.653064  123537 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:34.653090  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.655796  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656121  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656149  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656170  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656281  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656461  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.656562  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656586  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656640  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.656739  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656807  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.656883  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.657023  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.657249  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.759596  123537 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:34.765571  123537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:34.915897  123537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:34.923372  123537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:34.923471  123537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:34.940579  123537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:34.940613  123537 start.go:494] detecting cgroup driver to use...
	I0316 00:16:34.940699  123537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:34.957640  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:34.971525  123537 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:34.971598  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:34.987985  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:35.001952  123537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:35.124357  123537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:35.273948  123537 docker.go:233] disabling docker service ...
	I0316 00:16:35.274037  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:35.291073  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:35.311209  123537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:35.460630  123537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:35.581263  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:35.596460  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:35.617992  123537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:35.618042  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.628372  123537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:35.628426  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.639487  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.650397  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.662065  123537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:35.676003  123537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:35.686159  123537 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:35.686241  123537 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:35.699814  123537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:16:35.710182  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:35.831831  123537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:35.977556  123537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:35.977638  123537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:35.982729  123537 start.go:562] Will wait 60s for crictl version
	I0316 00:16:35.982806  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:16:35.986695  123537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:36.023299  123537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:36.023412  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.055441  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.090313  123537 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:36.091622  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:36.094687  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095062  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:36.095098  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095277  123537 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:36.099781  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:36.113522  123537 kubeadm.go:877] updating cluster {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:36.113674  123537 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:36.113743  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:36.152208  123537 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:36.152300  123537 ssh_runner.go:195] Run: which lz4
	I0316 00:16:36.156802  123537 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:36.161430  123537 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:36.161472  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:35.911510  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting to get IP...
	I0316 00:16:35.912562  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.912986  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.913064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:35.912955  124655 retry.go:31] will retry after 248.147893ms: waiting for machine to come up
	I0316 00:16:36.162476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163094  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163127  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.163032  124655 retry.go:31] will retry after 387.219214ms: waiting for machine to come up
	I0316 00:16:36.551678  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552203  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552236  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.552178  124655 retry.go:31] will retry after 391.385671ms: waiting for machine to come up
	I0316 00:16:36.945741  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946275  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.946216  124655 retry.go:31] will retry after 470.449619ms: waiting for machine to come up
	I0316 00:16:37.417836  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418324  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418353  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.418259  124655 retry.go:31] will retry after 508.962644ms: waiting for machine to come up
	I0316 00:16:37.929194  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929710  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.929671  124655 retry.go:31] will retry after 877.538639ms: waiting for machine to come up
	I0316 00:16:38.808551  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809061  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809100  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:38.809002  124655 retry.go:31] will retry after 754.319242ms: waiting for machine to come up
	I0316 00:16:39.565060  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565475  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565512  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:39.565411  124655 retry.go:31] will retry after 1.472475348s: waiting for machine to come up
	I0316 00:16:37.946470  123537 crio.go:444] duration metric: took 1.789700065s to copy over tarball
	I0316 00:16:37.946552  123537 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:40.497841  123537 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551257887s)
	I0316 00:16:40.497867  123537 crio.go:451] duration metric: took 2.551367803s to extract the tarball
	I0316 00:16:40.497875  123537 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:40.539695  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:40.588945  123537 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:40.588974  123537 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:40.588983  123537 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.28.4 crio true true} ...
	I0316 00:16:40.589125  123537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-666637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:40.589216  123537 ssh_runner.go:195] Run: crio config
	I0316 00:16:40.641673  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:40.641702  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:40.641719  123537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:40.641754  123537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-666637 NodeName:embed-certs-666637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:40.641939  123537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-666637"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:40.642024  123537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:40.652461  123537 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:40.652539  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:40.662114  123537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 00:16:40.679782  123537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:40.701982  123537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0316 00:16:40.720088  123537 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:40.724199  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:40.737133  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:40.860343  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:40.878437  123537 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637 for IP: 192.168.61.91
	I0316 00:16:40.878466  123537 certs.go:194] generating shared ca certs ...
	I0316 00:16:40.878489  123537 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:40.878690  123537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:40.878766  123537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:40.878779  123537 certs.go:256] generating profile certs ...
	I0316 00:16:40.878888  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/client.key
	I0316 00:16:40.878990  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key.07955952
	I0316 00:16:40.879059  123537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key
	I0316 00:16:40.879178  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:40.879225  123537 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:40.879239  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:40.879271  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:40.879302  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:40.879352  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:40.879409  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:40.880141  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:40.924047  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:40.962441  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:41.000283  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:41.034353  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 00:16:41.069315  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:16:41.100325  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:16:41.129285  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:16:41.155899  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:16:41.180657  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:16:41.205961  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:16:41.231886  123537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:16:41.249785  123537 ssh_runner.go:195] Run: openssl version
	I0316 00:16:41.255703  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:16:41.266968  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271536  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271595  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.277460  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:16:41.288854  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:16:41.300302  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305189  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305256  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.311200  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:16:41.322784  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:16:41.334879  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339774  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339837  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.345746  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:16:41.357661  123537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:16:41.362469  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:16:41.368875  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:16:41.375759  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:16:41.382518  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:16:41.388629  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:16:41.394882  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
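	Each `openssl x509 -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the restart path decides whether the existing control-plane certificates can be reused. A hedged sketch of the same check done natively in Go, without shelling out:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// mirroring `openssl x509 -checkend <seconds>` from the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
```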
	I0316 00:16:41.401114  123537 kubeadm.go:391] StartCluster: {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:16:41.401243  123537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:16:41.401304  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.449499  123537 cri.go:89] found id: ""
	I0316 00:16:41.449590  123537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:16:41.461139  123537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:16:41.461165  123537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:16:41.461173  123537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:16:41.461243  123537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:16:41.473648  123537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:16:41.474652  123537 kubeconfig.go:125] found "embed-certs-666637" server: "https://192.168.61.91:8443"
	I0316 00:16:41.476724  123537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:16:41.488387  123537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0316 00:16:41.488426  123537 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:16:41.488439  123537 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:16:41.488485  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.526197  123537 cri.go:89] found id: ""
	I0316 00:16:41.526283  123537 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:16:41.545489  123537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:16:41.555977  123537 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:16:41.555998  123537 kubeadm.go:156] found existing configuration files:
	
	I0316 00:16:41.556048  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:16:41.565806  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:16:41.565891  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:16:41.575646  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:16:41.585269  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:16:41.585329  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:16:41.595336  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.605081  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:16:41.605144  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.615182  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:16:41.624781  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:16:41.624837  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
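	The grep/rm pairs above implement the stale-config cleanup: each static kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; otherwise it is removed so kubeadm can regenerate it. A compact sketch of that logic (illustrative only, not kubeadm.go):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any static kubeconfig that does not reference
// the expected control-plane endpoint, matching the grep/rm pairs in the log.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or stale: remove so `kubeadm init phase kubeconfig` regenerates it.
			_ = os.Remove(f)
			fmt.Println("removed", f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}
```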
	I0316 00:16:41.634852  123537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:16:41.644749  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.748782  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.477775  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.688730  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
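	During restartPrimaryControlPlane, minikube replays individual `kubeadm init` phases rather than running a full init: certs, kubeconfig, kubelet-start, control-plane, and finally etcd (the etcd phase appears a few lines below, after the interleaved VM-boot messages from the other profile). A sketch of that phase sequence, assuming a plain local command runner rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// phases mirrors the order of the `kubeadm init phase ...` invocations in the log:
// the control plane is rebuilt piecewise instead of running a full `kubeadm init`.
var phases = []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}

func main() {
	const cfg = "/var/tmp/minikube/kubeadm.yaml"
	for _, p := range phases {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config %s`, p, cfg)
		if out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput(); err != nil {
			fmt.Printf("phase %q failed: %v\n%s\n", p, err, out)
			return
		}
	}
}
```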
	I0316 00:16:41.039441  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039924  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:41.039885  124655 retry.go:31] will retry after 1.408692905s: waiting for machine to come up
	I0316 00:16:42.449971  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450402  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:42.450355  124655 retry.go:31] will retry after 1.539639877s: waiting for machine to come up
	I0316 00:16:43.992314  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992833  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992869  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:43.992777  124655 retry.go:31] will retry after 2.297369864s: waiting for machine to come up
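	The "will retry after …" lines come from a backoff loop that polls libvirt's DHCP leases until the restarted VM reports an IP address, with the delay growing (and jittered) on each miss. A generic version of that pattern, shown only to illustrate the shape of the loop (the real one is minikube's retry package):

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping a jittered, growing delay in between - the same shape as the
// "will retry after 1.4s / 1.5s / 2.3s" lines above.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		sleep := delay + jitter
		fmt.Printf("will retry after %s\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2
	}
	return errors.New("gave up waiting for machine to come up")
}

func main() {
	calls := 0
	_ = retryWithBackoff(10, time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("no IP yet")
		}
		return nil
	})
}
```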
	I0316 00:16:42.777223  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.944089  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:16:42.944193  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.445082  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.945117  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.963812  123537 api_server.go:72] duration metric: took 1.019723734s to wait for apiserver process to appear ...
	I0316 00:16:43.963845  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:16:43.963871  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.924208  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.924258  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.924278  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.953212  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.953245  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.964449  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.988201  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.988232  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:47.464502  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.469385  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.469421  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:47.964483  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.970448  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.970492  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:48.463984  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:48.468908  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:16:48.476120  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:16:48.476153  123537 api_server.go:131] duration metric: took 4.512298176s to wait for apiserver health ...
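	The healthz transcript above shows the apiserver coming up in stages: first 403 because the probe is anonymous and RBAC is not bootstrapped yet, then 500 while the rbac/bootstrap-roles and priority-class post-start hooks finish, and finally 200. A minimal polling loop in the same spirit (illustrative only; minikube's real implementation lives in api_server.go and presents the cluster CA instead of skipping TLS verification):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. TLS verification is skipped here purely for brevity.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.91:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```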
	I0316 00:16:48.476164  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:48.476172  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:48.478076  123537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:16:48.479565  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:16:48.490129  123537 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
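	With the kvm2 driver and the crio runtime, minikube falls back to its built-in bridge CNI and writes a conflist to /etc/cni/net.d/1-k8s.conflist. The 457-byte file itself is not shown in the log; the sketch below writes a representative bridge+portmap conflist whose field values are illustrative, not the literal file minikube generates:

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge+portmap CNI conflist. Values are assumptions for
// illustration; only the overall structure matches what a bridge CNI needs.
const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```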
	I0316 00:16:48.516263  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:16:48.532732  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:16:48.532768  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:16:48.532778  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:16:48.532788  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:16:48.532795  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:16:48.532801  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:16:48.532808  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:16:48.532815  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:16:48.532822  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:16:48.532833  123537 system_pods.go:74] duration metric: took 16.547677ms to wait for pod list to return data ...
	I0316 00:16:48.532845  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:16:48.535945  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:16:48.535989  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:16:48.536006  123537 node_conditions.go:105] duration metric: took 3.154184ms to run NodePressure ...
	I0316 00:16:48.536027  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:48.733537  123537 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739166  123537 kubeadm.go:733] kubelet initialised
	I0316 00:16:48.739196  123537 kubeadm.go:734] duration metric: took 5.63118ms waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739209  123537 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:48.744724  123537 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.750261  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750299  123537 pod_ready.go:81] duration metric: took 5.547917ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.750310  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750323  123537 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.755340  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755362  123537 pod_ready.go:81] duration metric: took 5.029639ms for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.755371  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755379  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.761104  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761128  123537 pod_ready.go:81] duration metric: took 5.740133ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.761138  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761146  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.921215  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921244  123537 pod_ready.go:81] duration metric: took 160.08501ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.921254  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921260  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.319922  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319954  123537 pod_ready.go:81] duration metric: took 398.685799ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.319963  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319969  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.720866  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720922  123537 pod_ready.go:81] duration metric: took 400.944023ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.720948  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720967  123537 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:50.120836  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120865  123537 pod_ready.go:81] duration metric: took 399.883676ms for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:50.120875  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120882  123537 pod_ready.go:38] duration metric: took 1.381661602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
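	The "extra waiting" step above polls each system-critical pod for its Ready condition, but records an error and moves on when the hosting node is itself not Ready yet (which is why every pod is "skipping!" immediately after the kubelet restart). A compact client-go sketch of the underlying pod-readiness check, assuming a working kubeconfig (this is not minikube's pod_ready.go, just the core condition test; the pod name is taken from the log):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-t8xb4", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("ready:", isPodReady(pod))
}
```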
	I0316 00:16:50.120923  123537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:16:50.133619  123537 ops.go:34] apiserver oom_adj: -16
	I0316 00:16:50.133653  123537 kubeadm.go:591] duration metric: took 8.672472438s to restartPrimaryControlPlane
	I0316 00:16:50.133663  123537 kubeadm.go:393] duration metric: took 8.732557685s to StartCluster
	I0316 00:16:50.133684  123537 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.133760  123537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:16:50.135355  123537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.135613  123537 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:16:50.140637  123537 out.go:177] * Verifying Kubernetes components...
	I0316 00:16:50.135727  123537 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:16:50.135843  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:50.142015  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:50.142027  123537 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-666637"
	I0316 00:16:50.142050  123537 addons.go:69] Setting default-storageclass=true in profile "embed-certs-666637"
	I0316 00:16:50.142070  123537 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-666637"
	W0316 00:16:50.142079  123537 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:16:50.142090  123537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-666637"
	I0316 00:16:50.142092  123537 addons.go:69] Setting metrics-server=true in profile "embed-certs-666637"
	I0316 00:16:50.142121  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142124  123537 addons.go:234] Setting addon metrics-server=true in "embed-certs-666637"
	W0316 00:16:50.142136  123537 addons.go:243] addon metrics-server should already be in state true
	I0316 00:16:50.142168  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142439  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142468  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142558  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142577  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.156773  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0316 00:16:50.156804  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0316 00:16:50.157267  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157268  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157591  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0316 00:16:50.157835  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157841  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157857  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157858  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157925  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.158223  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158226  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158404  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.158419  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.158731  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158753  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158795  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158828  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158932  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.159126  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.162347  123537 addons.go:234] Setting addon default-storageclass=true in "embed-certs-666637"
	W0316 00:16:50.162365  123537 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:16:50.162392  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.162612  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.162649  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.172299  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0316 00:16:50.172676  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.173173  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.173193  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.173547  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.173770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.175668  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.177676  123537 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:16:50.175968  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0316 00:16:50.176110  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0316 00:16:50.179172  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:16:50.179189  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:16:50.179206  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.179453  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179538  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179888  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.179909  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180021  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.180037  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180266  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180385  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180613  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.180788  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.180811  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.185060  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.192504  123537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:16:46.292804  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293326  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293363  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:46.293267  124655 retry.go:31] will retry after 2.301997121s: waiting for machine to come up
	I0316 00:16:48.596337  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596777  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:48.596731  124655 retry.go:31] will retry after 3.159447069s: waiting for machine to come up
	I0316 00:16:50.186146  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.186717  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.193945  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.193971  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.194051  123537 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.194079  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:16:50.194100  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.194103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.194264  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.194420  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.196511  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0316 00:16:50.197160  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.197580  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.197598  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.197658  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198007  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.198039  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.198038  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198235  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.198237  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.198435  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.198612  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.198772  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.200270  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.200540  123537 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.200554  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:16:50.200566  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.203147  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203634  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.203655  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203765  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.203966  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.204201  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.204335  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.317046  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:50.340203  123537 node_ready.go:35] waiting up to 6m0s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:50.415453  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.423732  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.424648  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:16:50.424663  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:16:50.470134  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:16:50.470164  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:16:50.518806  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:50.518833  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:16:50.570454  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:51.627153  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203388401s)
	I0316 00:16:51.627211  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627222  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627419  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211925303s)
	I0316 00:16:51.627468  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627533  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627595  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627609  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627620  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627549  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627859  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627885  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627895  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627914  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627956  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627976  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.629345  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.633811  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.633831  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.634043  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.634081  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726400  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.15588774s)
	I0316 00:16:51.726458  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726472  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.726820  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.726853  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.726875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726889  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726898  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.727178  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.727193  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.727206  123537 addons.go:470] Verifying addon metrics-server=true in "embed-certs-666637"
	I0316 00:16:51.729277  123537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0316 00:16:51.730645  123537 addons.go:505] duration metric: took 1.594919212s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0316 00:16:52.344107  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:51.757133  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757570  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Found IP for machine: 192.168.72.198
	I0316 00:16:51.757603  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has current primary IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserving static IP address...
	I0316 00:16:51.758067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.758093  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | skip adding static IP to network mk-default-k8s-diff-port-313436 - found existing host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"}
	I0316 00:16:51.758110  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserved static IP address: 192.168.72.198
	I0316 00:16:51.758120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Getting to WaitForSSH function...
	I0316 00:16:51.758138  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for SSH to be available...
	I0316 00:16:51.760276  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760596  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.760632  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760711  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH client type: external
	I0316 00:16:51.760744  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa (-rw-------)
	I0316 00:16:51.760797  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:51.760820  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | About to run SSH command:
	I0316 00:16:51.760861  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | exit 0
	I0316 00:16:51.887432  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:51.887829  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetConfigRaw
	I0316 00:16:51.888471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:51.891514  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.891923  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.891949  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.892232  123819 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:16:51.892502  123819 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:51.892527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:51.892782  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:51.895025  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.895367  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:51.895683  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895841  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:51.896178  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:51.896361  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:51.896372  123819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:52.012107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:52.012154  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012405  123819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-313436"
	I0316 00:16:52.012434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012640  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.015307  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.015823  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.015847  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.016055  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.016266  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016433  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016565  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.016758  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.016976  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.016992  123819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313436 && echo "default-k8s-diff-port-313436" | sudo tee /etc/hostname
	I0316 00:16:52.149152  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313436
	
	I0316 00:16:52.149180  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.152472  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.152852  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.152896  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.153056  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.153239  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153412  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.153837  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.154077  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.154108  123819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:52.285258  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:52.285290  123819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:52.285313  123819 buildroot.go:174] setting up certificates
	I0316 00:16:52.285323  123819 provision.go:84] configureAuth start
	I0316 00:16:52.285331  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.285631  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:52.288214  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288494  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.288527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288699  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.290965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291354  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.291380  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291571  123819 provision.go:143] copyHostCerts
	I0316 00:16:52.291644  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:52.291658  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:52.291719  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:52.291827  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:52.291839  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:52.291868  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:52.291966  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:52.291978  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:52.292005  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:52.292095  123819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313436 san=[127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube]
	I0316 00:16:52.536692  123819 provision.go:177] copyRemoteCerts
	I0316 00:16:52.536756  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:52.536790  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.539525  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.539805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.539837  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.540067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.540264  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.540424  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.540599  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:52.629139  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:52.655092  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0316 00:16:52.681372  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:52.706496  123819 provision.go:87] duration metric: took 421.160351ms to configureAuth
	I0316 00:16:52.706529  123819 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:52.706737  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:52.706828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.709743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710173  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.710198  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710403  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.710616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710822  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710983  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.711148  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.711359  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.711380  123819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:53.005107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:53.005138  123819 machine.go:97] duration metric: took 1.112619102s to provisionDockerMachine
	I0316 00:16:53.005153  123819 start.go:293] postStartSetup for "default-k8s-diff-port-313436" (driver="kvm2")
	I0316 00:16:53.005166  123819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:53.005185  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.005547  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:53.005581  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.008749  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009170  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.009196  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009416  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.009617  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.009795  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.009973  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.100468  123819 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:53.105158  123819 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:53.105181  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:53.105243  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:53.105314  123819 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:53.105399  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:53.116078  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:53.142400  123819 start.go:296] duration metric: took 137.231635ms for postStartSetup
	I0316 00:16:53.142454  123819 fix.go:56] duration metric: took 18.493815855s for fixHost
	I0316 00:16:53.142483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.145282  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145658  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.145688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145878  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.146104  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146288  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146445  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.146625  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:53.146820  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:53.146834  123819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:53.260232  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548213.237261690
	
	I0316 00:16:53.260255  123819 fix.go:216] guest clock: 1710548213.237261690
	I0316 00:16:53.260262  123819 fix.go:229] Guest: 2024-03-16 00:16:53.23726169 +0000 UTC Remote: 2024-03-16 00:16:53.142460792 +0000 UTC m=+262.706636561 (delta=94.800898ms)
	I0316 00:16:53.260292  123819 fix.go:200] guest clock delta is within tolerance: 94.800898ms
	I0316 00:16:53.260298  123819 start.go:83] releasing machines lock for "default-k8s-diff-port-313436", held for 18.611697781s
	I0316 00:16:53.260323  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.260629  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:53.263641  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264002  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.264032  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.264889  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265217  123819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:53.265273  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.265404  123819 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:53.265434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.268274  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268538  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268684  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268727  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.268969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268995  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.269113  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269206  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.269298  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269419  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.269476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269572  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.372247  123819 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:53.378643  123819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:53.527036  123819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:53.534220  123819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:53.534312  123819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:53.554856  123819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:53.554900  123819 start.go:494] detecting cgroup driver to use...
	I0316 00:16:53.554971  123819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:53.580723  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:53.599919  123819 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:53.599996  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:53.613989  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:53.628748  123819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:53.745409  123819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:53.906668  123819 docker.go:233] disabling docker service ...
	I0316 00:16:53.906733  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:53.928452  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:53.949195  123819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:54.118868  123819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:54.250006  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:54.264754  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:54.285825  123819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:54.285890  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.298522  123819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:54.298590  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.311118  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.323928  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.336128  123819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:54.348715  123819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:54.359657  123819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:54.359718  123819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:54.376411  123819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:16:54.388136  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:54.530444  123819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:54.681895  123819 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:54.681984  123819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:54.687334  123819 start.go:562] Will wait 60s for crictl version
	I0316 00:16:54.687398  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:16:54.691443  123819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:54.730408  123819 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:54.730505  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.761591  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.792351  123819 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
	I0316 00:16:54.793693  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:54.797023  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797439  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:54.797471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797665  123819 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:54.802065  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:54.815168  123819 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:54.815285  123819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:54.815345  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:54.855493  123819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:54.855553  123819 ssh_runner.go:195] Run: which lz4
	I0316 00:16:54.860096  123819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:54.865644  123819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:54.865675  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:54.345117  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:56.346342  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:57.346164  123537 node_ready.go:49] node "embed-certs-666637" has status "Ready":"True"
	I0316 00:16:57.346194  123537 node_ready.go:38] duration metric: took 7.005950923s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:57.346207  123537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:57.361331  123537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377726  123537 pod_ready.go:92] pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace has status "Ready":"True"
	I0316 00:16:57.377750  123537 pod_ready.go:81] duration metric: took 16.388353ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377760  123537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
	I0316 00:16:56.676506  123819 crio.go:444] duration metric: took 1.816442841s to copy over tarball
	I0316 00:16:56.676609  123819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:59.338617  123819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661966532s)
	I0316 00:16:59.338655  123819 crio.go:451] duration metric: took 2.662115388s to extract the tarball
	I0316 00:16:59.338665  123819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:59.387693  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:59.453534  123819 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:59.453565  123819 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:59.453575  123819 kubeadm.go:928] updating node { 192.168.72.198 8444 v1.28.4 crio true true} ...
	I0316 00:16:59.453744  123819 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-313436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:59.453841  123819 ssh_runner.go:195] Run: crio config
	I0316 00:16:59.518492  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:16:59.518525  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:59.518543  123819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:59.518572  123819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.198 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313436 NodeName:default-k8s-diff-port-313436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:59.518791  123819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.198
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313436"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:59.518876  123819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:59.529778  123819 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:59.529860  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:59.542186  123819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0316 00:16:59.563037  123819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:59.585167  123819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 00:16:59.607744  123819 ssh_runner.go:195] Run: grep 192.168.72.198	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:59.612687  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:59.628607  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:59.767487  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:59.786494  123819 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436 for IP: 192.168.72.198
	I0316 00:16:59.786520  123819 certs.go:194] generating shared ca certs ...
	I0316 00:16:59.786545  123819 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:59.786688  123819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:59.786722  123819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:59.786728  123819 certs.go:256] generating profile certs ...
	I0316 00:16:59.786827  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.key
	I0316 00:16:59.786975  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key.254d5830
	I0316 00:16:59.787049  123819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key
	I0316 00:16:59.787204  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:59.787248  123819 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:59.787262  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:59.787295  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:59.787351  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:59.787386  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:59.787449  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:59.788288  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:59.824257  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:59.859470  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:59.904672  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:59.931832  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0316 00:16:59.965654  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:00.006949  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:00.039120  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:00.071341  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:00.095585  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:00.122165  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:00.149982  123819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:00.170019  123819 ssh_runner.go:195] Run: openssl version
	I0316 00:17:00.176232  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:00.188738  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193708  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193780  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.200433  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:00.215116  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:00.228871  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234074  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234141  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.240553  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:00.252454  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:00.264690  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269493  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269573  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.275584  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:00.287859  123819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:00.292474  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:00.298744  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:00.304793  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:00.311156  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:00.317777  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:00.324148  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:00.330667  123819 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:00.330763  123819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:00.330813  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.374868  123819 cri.go:89] found id: ""
	I0316 00:17:00.374961  123819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:00.386218  123819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:00.386240  123819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:00.386245  123819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:00.386288  123819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:00.397129  123819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:00.398217  123819 kubeconfig.go:125] found "default-k8s-diff-port-313436" server: "https://192.168.72.198:8444"
	I0316 00:17:00.400506  123819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:00.411430  123819 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.198
	I0316 00:17:00.411462  123819 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:00.411477  123819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:00.411528  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.448545  123819 cri.go:89] found id: ""
	I0316 00:17:00.448619  123819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:00.469230  123819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:00.480622  123819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:00.480644  123819 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:00.480695  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0316 00:16:59.384420  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.094272  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.390117  123537 pod_ready.go:92] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.390145  123537 pod_ready.go:81] duration metric: took 5.012377671s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.390156  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398207  123537 pod_ready.go:92] pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.398236  123537 pod_ready.go:81] duration metric: took 8.071855ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398248  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405415  123537 pod_ready.go:92] pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.405443  123537 pod_ready.go:81] duration metric: took 7.186495ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405453  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412646  123537 pod_ready.go:92] pod "kube-proxy-8fpc5" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.412665  123537 pod_ready.go:81] duration metric: took 7.204465ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412673  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606336  123537 pod_ready.go:92] pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.606369  123537 pod_ready.go:81] duration metric: took 193.687951ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606384  123537 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:00.492088  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:00.743504  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:00.756322  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0316 00:17:00.766476  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:00.766545  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:00.776849  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.786610  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:00.786676  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.797455  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0316 00:17:00.808026  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:00.808083  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:00.819306  123819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:00.834822  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:00.962203  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.535753  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.762322  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.843195  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.944855  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:01.944971  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.446047  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.945791  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.983641  123819 api_server.go:72] duration metric: took 1.038786332s to wait for apiserver process to appear ...
	I0316 00:17:02.983680  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:02.983704  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:04.615157  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:07.114447  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:06.343729  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.343763  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.343786  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.364621  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.364659  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.483852  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.491403  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.491433  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:06.983931  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.994258  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.994296  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.483821  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.506265  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:07.506301  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.983846  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.988700  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:17:07.995996  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:17:07.996021  123819 api_server.go:131] duration metric: took 5.012333318s to wait for apiserver health ...
	I0316 00:17:07.996032  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:17:07.996041  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:07.998091  123819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:17:07.999628  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:17:08.010263  123819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:17:08.041667  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:17:08.053611  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:17:08.053656  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:17:08.053668  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:17:08.053681  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:17:08.053694  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:17:08.053706  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:17:08.053717  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:17:08.053730  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:17:08.053739  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:17:08.053747  123819 system_pods.go:74] duration metric: took 12.054433ms to wait for pod list to return data ...
	I0316 00:17:08.053763  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:17:08.057781  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:17:08.057808  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:17:08.057818  123819 node_conditions.go:105] duration metric: took 4.047698ms to run NodePressure ...
	I0316 00:17:08.057837  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:08.282870  123819 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288338  123819 kubeadm.go:733] kubelet initialised
	I0316 00:17:08.288359  123819 kubeadm.go:734] duration metric: took 5.456436ms waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288367  123819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:08.294256  123819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.302762  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302802  123819 pod_ready.go:81] duration metric: took 8.523485ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.302814  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302823  123819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.309581  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309604  123819 pod_ready.go:81] duration metric: took 6.77179ms for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.309617  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309625  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.315399  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315419  123819 pod_ready.go:81] duration metric: took 5.78558ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.315428  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315434  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.445776  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445808  123819 pod_ready.go:81] duration metric: took 130.363739ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.445821  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445829  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.846181  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846228  123819 pod_ready.go:81] duration metric: took 400.382095ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.846243  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846251  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.245568  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245599  123819 pod_ready.go:81] duration metric: took 399.329058ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.245612  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245618  123819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.646855  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646888  123819 pod_ready.go:81] duration metric: took 401.262603ms for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.646901  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646909  123819 pod_ready.go:38] duration metric: took 1.358531936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:09.646926  123819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:17:09.659033  123819 ops.go:34] apiserver oom_adj: -16
	I0316 00:17:09.659059  123819 kubeadm.go:591] duration metric: took 9.272806311s to restartPrimaryControlPlane
	I0316 00:17:09.659070  123819 kubeadm.go:393] duration metric: took 9.328414192s to StartCluster
	I0316 00:17:09.659091  123819 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.659166  123819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:09.661439  123819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.661729  123819 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:17:09.663462  123819 out.go:177] * Verifying Kubernetes components...
	I0316 00:17:09.661800  123819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:17:09.661986  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:17:09.664841  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:09.664874  123819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664839  123819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664964  123819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.664980  123819 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:17:09.664847  123819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.665023  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.665037  123819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.665053  123819 addons.go:243] addon metrics-server should already be in state true
	I0316 00:17:09.665084  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.664922  123819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313436"
	I0316 00:17:09.665349  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665377  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665445  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665474  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665607  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665637  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.680337  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0316 00:17:09.680351  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0316 00:17:09.680799  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.680939  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.681331  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681366  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681541  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681560  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681736  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.681974  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.682359  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682407  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.682461  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682494  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.683660  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0316 00:17:09.684088  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.684575  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.684600  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.684992  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.685218  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.688973  123819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.688994  123819 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:17:09.689028  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.689372  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.689397  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.698126  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0316 00:17:09.698527  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.699052  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.699079  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.699407  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.699606  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.700389  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0316 00:17:09.700824  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.701308  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.701327  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.701610  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.701681  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.704168  123819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:17:09.701891  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.704403  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0316 00:17:09.706042  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:17:09.706076  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:17:09.706102  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.706988  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.707805  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.707831  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.708465  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.708556  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.709451  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.709500  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.709520  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.711354  123819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:09.709911  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.710103  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.712849  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.712865  123819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:09.712886  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:17:09.712910  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.713010  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.713202  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.713365  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.715688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716029  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.716064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716260  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.716437  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.716662  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.716826  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.725309  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0316 00:17:09.725659  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.726175  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.726191  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.726492  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.726665  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.728459  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.728721  123819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.728739  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:17:09.728753  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.732122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732546  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.732576  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732733  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.732908  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.733064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.733206  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.838182  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:09.857248  123819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:09.956751  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:17:09.956775  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:17:09.982142  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.992293  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:17:09.992319  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:17:10.000878  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:10.035138  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:10.035171  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:17:10.066721  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:11.153759  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171576504s)
	I0316 00:17:11.153815  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.153828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154237  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154241  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154262  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.154271  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.154281  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154569  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154601  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154609  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165531  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.165579  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.165868  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.165922  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165879  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536530  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.469764101s)
	I0316 00:17:11.536596  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536607  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536648  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53572281s)
	I0316 00:17:11.536694  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536713  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536963  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536988  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536995  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537001  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537005  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537010  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537013  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537019  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537218  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537365  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537376  123819 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-313436"
	I0316 00:17:11.537404  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537425  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.539481  123819 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
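	Note: the addon enable sequence above amounts to copying each manifest into /etc/kubernetes/addons on the guest and then invoking the kubectl binary under /var/lib/minikube/binaries against the in-VM kubeconfig. A minimal sketch of the final step, with paths taken verbatim from the log:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml

	The storageclass and storage-provisioner manifests are applied the same way, which is why three separate kubectl runs complete around 00:17:11 above.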
	I0316 00:17:09.114699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:11.613507  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:13.204814  123454 start.go:364] duration metric: took 52.116735477s to acquireMachinesLock for "no-preload-238598"
	I0316 00:17:13.204888  123454 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:17:13.204900  123454 fix.go:54] fixHost starting: 
	I0316 00:17:13.205405  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:13.205446  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:13.222911  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0316 00:17:13.223326  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:13.223784  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:17:13.223811  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:13.224153  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:13.224338  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:13.224507  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:17:13.226028  123454 fix.go:112] recreateIfNeeded on no-preload-238598: state=Stopped err=<nil>
	I0316 00:17:13.226051  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	W0316 00:17:13.226232  123454 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:17:13.227865  123454 out.go:177] * Restarting existing kvm2 VM for "no-preload-238598" ...
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
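	Note: configureAuth generates a machine server certificate with the SANs listed above and pushes the CA and server key pair to /etc/docker on the guest. A quick way to double-check what landed there (a sketch, assuming openssl is available on the guest image):

	    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
	    # SANs should include 127.0.0.1, 192.168.39.107, localhost, minikube, old-k8s-version-402923
	    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'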
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
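	Note: the crio.minikube step just above writes a small environment file and restarts CRI-O. Reconstructed from the command and its echoed output, the file ends up containing:

	    # /etc/sysconfig/crio.minikube
	    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

	followed by sudo systemctl restart crio so the flag takes effect (presumably the crio unit sources this file via an EnvironmentFile directive; that detail is not shown in the log).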
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
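	Note: taken together, the sed invocations above leave the following key settings in /etc/crio/crio.conf.d/02-crio.conf (only the edited lines are shown; the surrounding TOML sections are omitted here):

	    pause_image = "registry.k8s.io/pause:3.2"
	    cgroup_manager = "cgroupfs"
	    conmon_cgroup = "pod"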
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
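	Note: the sysctl probe above fails because /proc/sys/net/bridge is missing, which typically means br_netfilter is not loaded yet, so minikube falls back to loading the module and enabling IPv4 forwarding directly. A sketch of the same fix-up, plus a check that the failing sysctl now resolves:

	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward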
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
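	Note: host.minikube.internal is minikube's stable name for the host-side gateway of the VM network (192.168.39.1 here); the command above rewrites /etc/hosts idempotently by dropping any old entry before appending the current one. Equivalent guest-side steps, split out for readability:

	    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
	      echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	    grep host.minikube.internal /etc/hosts   # should show the gateway IP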
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
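	Note: since crictl reported no preloaded kube images and /preloaded.tar.lz4 did not exist on the guest, the cached preload tarball (~473 MB) is copied in over SSH and, as the log lines further down show, unpacked into /var and then removed. The guest-side extraction and cleanup, copied from those commands:

	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4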
	I0316 00:17:11.540876  123819 addons.go:505] duration metric: took 1.87908534s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0316 00:17:11.862772  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.866333  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.229181  123454 main.go:141] libmachine: (no-preload-238598) Calling .Start
	I0316 00:17:13.229409  123454 main.go:141] libmachine: (no-preload-238598) Ensuring networks are active...
	I0316 00:17:13.230257  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network default is active
	I0316 00:17:13.230618  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network mk-no-preload-238598 is active
	I0316 00:17:13.231135  123454 main.go:141] libmachine: (no-preload-238598) Getting domain xml...
	I0316 00:17:13.232023  123454 main.go:141] libmachine: (no-preload-238598) Creating domain...
	I0316 00:17:14.513800  123454 main.go:141] libmachine: (no-preload-238598) Waiting to get IP...
	I0316 00:17:14.514838  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.515446  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.515520  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.515407  125029 retry.go:31] will retry after 275.965955ms: waiting for machine to come up
	I0316 00:17:14.793095  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.793594  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.793721  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.793667  125029 retry.go:31] will retry after 347.621979ms: waiting for machine to come up
	I0316 00:17:15.143230  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.143869  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.143909  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.143820  125029 retry.go:31] will retry after 301.441766ms: waiting for machine to come up
	I0316 00:17:15.446476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.446917  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.446964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.446865  125029 retry.go:31] will retry after 431.207345ms: waiting for machine to come up
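	Note: fixHost found the no-preload-238598 machine stopped, so the kvm2 driver restarts the existing libvirt domain and then polls for a DHCP lease until an IP appears, as the retry lines above show. The same state can be inspected by hand with virsh (a sketch; the domain and network names are taken from the DBG lines above, and qemu:///system is assumed as the libvirt URI, matching the profile config dumped elsewhere in this log):

	    virsh -c qemu:///system domstate no-preload-238598
	    virsh -c qemu:///system net-dhcp-leases mk-no-preload-238598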
	I0316 00:17:13.615911  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.616381  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:17.618352  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:16.362143  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:16.866488  123819 node_ready.go:49] node "default-k8s-diff-port-313436" has status "Ready":"True"
	I0316 00:17:16.866522  123819 node_ready.go:38] duration metric: took 7.00923342s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:16.866535  123819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:16.881909  123819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897574  123819 pod_ready.go:92] pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:16.897617  123819 pod_ready.go:81] duration metric: took 15.618728ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897630  123819 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:18.910740  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.879693  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.880186  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.880222  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.880148  125029 retry.go:31] will retry after 747.650888ms: waiting for machine to come up
	I0316 00:17:16.629378  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:16.631312  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:16.631352  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:16.631193  125029 retry.go:31] will retry after 670.902171ms: waiting for machine to come up
	I0316 00:17:17.304282  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:17.304704  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:17.304751  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:17.304658  125029 retry.go:31] will retry after 1.160879196s: waiting for machine to come up
	I0316 00:17:18.466662  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:18.467103  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:18.467136  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:18.467049  125029 retry.go:31] will retry after 948.597188ms: waiting for machine to come up
	I0316 00:17:19.417144  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:19.417623  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:19.417657  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:19.417561  125029 retry.go:31] will retry after 1.263395738s: waiting for machine to come up
	I0316 00:17:20.289713  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.613643  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
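The cache_images lines above follow a simple per-image decision: ask the container runtime whether the image is already present (sudo podman image inspect), remove any mismatched copy, and otherwise look for a pre-extracted tarball under the cache directory; when that tarball is missing, the W-level warning above is emitted and the images are pulled later instead. A rough, hedged sketch of the presence check and cache lookup follows; the real cache_images.go also does transfers, retries, and parallelism that are omitted here.

	// Illustrative sketch of the image-cache check seen above.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// imagePresent mirrors the `sudo podman image inspect --format {{.Id}}` probe.
	func imagePresent(image string) bool {
		out, err := exec.Command("sudo", "podman", "image", "inspect",
			"--format", "{{.Id}}", image).Output()
		return err == nil && strings.TrimSpace(string(out)) != ""
	}

	func main() {
		cacheDir := "/home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64"
		image := "registry.k8s.io/kube-proxy:v1.20.0"

		if imagePresent(image) {
			fmt.Println(image, "already present in the container runtime")
			return
		}
		// cached tarballs are named like registry.k8s.io/kube-proxy_v1.20.0
		tarball := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(tarball); err != nil {
			// this is the case that produced the W-level warning above
			fmt.Println("unable to load cached image:", err)
			return
		}
		fmt.Println("would load", tarball, "into the runtime here")
	}
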
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
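The InitConfiguration / ClusterConfiguration / KubeletConfiguration / KubeProxyConfiguration dump above is rendered from the kubeadm options struct logged just before it and shipped to the node as /var/tmp/minikube/kubeadm.yaml.new (the scp just above). A small sketch of how such a fragment can be rendered from a struct with text/template; the struct fields and template below are simplified assumptions, not minikube's bootstrapper types.

	// Minimal text/template sketch: render an InitConfiguration fragment from a
	// parameter struct, in the spirit of the options -> YAML step logged above.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		CRISocket        string
		NodeName         string
	}

	const initConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  kubeletExtraArgs:
	    node-ip: {{.AdvertiseAddress}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.39.107",
			BindPort:         8443,
			CRISocket:        "/var/run/crio/crio.sock",
			NodeName:         "old-k8s-version-402923",
		}
		t := template.Must(template.New("init").Parse(initConfigTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}

The generated file is later diffed against the live /var/tmp/minikube/kubeadm.yaml and only copied over it when the two differ, which is the `sudo diff -u` and `sudo cp` pair visible further down in this log.
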
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
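Each CA certificate copied under /usr/share/ca-certificates is hashed with openssl x509 -hash -noout and then symlinked as <hash>.0 under /etc/ssl/certs, which is how OpenSSL locates trust anchors by subject hash. A compact sketch of that install step, driving the same openssl invocation (it assumes permission to write /etc/ssl/certs, and error handling is minimal):

	// Sketch of the hash-and-symlink step above: compute the OpenSSL subject hash
	// of a CA certificate and link it into /etc/ssl/certs as <hash>.0.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func installCACert(pemPath, certsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		_ = os.Remove(link) // replace an existing link, like `ln -fs`
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
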
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
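The openssl x509 -checkend 86400 calls above confirm that each control-plane certificate remains valid for at least another 24 hours; a failing check would trigger regeneration before kubeadm runs. An illustrative Go equivalent of that check:

	// Go equivalent of `openssl x509 -noout -in <cert> -checkend 86400`:
	// report whether a PEM certificate expires within the next 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(pemPath string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}
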
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
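The repeating sudo pgrep -xnf kube-apiserver.*minikube.* lines are the restart path polling, roughly every 500ms, for the kube-apiserver process to appear after the kubeadm init phases. A compact sketch of that wait loop (the interval and timeout here are assumptions):

	// Sketch of the apiserver-process wait seen above: run pgrep on an interval
	// until it succeeds or the timeout elapses.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func waitForAPIServerProcess(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			// pgrep exits 0 only when a matching process exists
			if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("kube-apiserver process never appeared within %v", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		fmt.Println(waitForAPIServerProcess(2 * time.Minute))
	}
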
	I0316 00:17:21.865146  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.241535  123819 pod_ready.go:92] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.241561  123819 pod_ready.go:81] duration metric: took 5.34392174s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.241573  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247469  123819 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.247501  123819 pod_ready.go:81] duration metric: took 5.919787ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247515  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756151  123819 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.756180  123819 pod_ready.go:81] duration metric: took 508.652978ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756194  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762214  123819 pod_ready.go:92] pod "kube-proxy-btmmm" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.762254  123819 pod_ready.go:81] duration metric: took 6.041426ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762268  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769644  123819 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.769668  123819 pod_ready.go:81] duration metric: took 7.391813ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769681  123819 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:24.780737  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
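The pod_ready lines above repeatedly test the Ready condition of each kube-system pod and record how long each one took; only metrics-server is still reporting Ready=False at this point. A hedged client-go sketch of the underlying condition check follows (the kubeconfig path and pod name are placeholders, and this is not minikube's pod_ready.go):

	// Hedged sketch of a pod Ready check with client-go: fetch the pod and
	// inspect its PodReady condition.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "metrics-server-57f55c9bc5-cm878", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, podReady(pod))
	}
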
	I0316 00:17:20.682443  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:20.798804  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:20.798840  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:20.682821  125029 retry.go:31] will retry after 1.834378571s: waiting for machine to come up
	I0316 00:17:22.518539  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:22.518997  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:22.519027  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:22.518945  125029 retry.go:31] will retry after 1.944866033s: waiting for machine to come up
	I0316 00:17:24.466332  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:24.466902  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:24.466930  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:24.466847  125029 retry.go:31] will retry after 3.4483736s: waiting for machine to come up
	I0316 00:17:24.615642  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.113920  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.278017  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:29.777128  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.919457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:27.919931  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:27.919964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:27.919891  125029 retry.go:31] will retry after 3.122442649s: waiting for machine to come up
	I0316 00:17:29.613500  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.613674  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.276855  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:34.277228  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.044512  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:31.044939  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:31.044970  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:31.044884  125029 retry.go:31] will retry after 4.529863895s: waiting for machine to come up
	I0316 00:17:34.112266  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:36.118023  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:35.576311  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.576834  123454 main.go:141] libmachine: (no-preload-238598) Found IP for machine: 192.168.50.137
	I0316 00:17:35.576858  123454 main.go:141] libmachine: (no-preload-238598) Reserving static IP address...
	I0316 00:17:35.576875  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has current primary IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.577312  123454 main.go:141] libmachine: (no-preload-238598) Reserved static IP address: 192.168.50.137
	I0316 00:17:35.577355  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.577365  123454 main.go:141] libmachine: (no-preload-238598) Waiting for SSH to be available...
	I0316 00:17:35.577404  123454 main.go:141] libmachine: (no-preload-238598) DBG | skip adding static IP to network mk-no-preload-238598 - found existing host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"}
	I0316 00:17:35.577419  123454 main.go:141] libmachine: (no-preload-238598) DBG | Getting to WaitForSSH function...
	I0316 00:17:35.579640  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580061  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.580108  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580210  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH client type: external
	I0316 00:17:35.580269  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa (-rw-------)
	I0316 00:17:35.580303  123454 main.go:141] libmachine: (no-preload-238598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:35.580319  123454 main.go:141] libmachine: (no-preload-238598) DBG | About to run SSH command:
	I0316 00:17:35.580339  123454 main.go:141] libmachine: (no-preload-238598) DBG | exit 0
	I0316 00:17:35.711373  123454 main.go:141] libmachine: (no-preload-238598) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:35.711791  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetConfigRaw
	I0316 00:17:35.712598  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:35.715455  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.715929  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.715954  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.716326  123454 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:17:35.716525  123454 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:35.716551  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:35.716802  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.719298  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719612  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.719644  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719780  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.720005  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720178  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720315  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.720487  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.720666  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.720677  123454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:35.835733  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:35.835760  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836004  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:17:35.836033  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836240  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.839024  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839413  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.839445  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839627  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.839811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.839977  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.840133  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.840279  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.840485  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.840504  123454 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-238598 && echo "no-preload-238598" | sudo tee /etc/hostname
	I0316 00:17:35.976590  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-238598
	
	I0316 00:17:35.976624  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.979354  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979689  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.979720  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979879  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.980104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980267  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980445  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.980602  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.980796  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.980815  123454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-238598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-238598/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-238598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:36.106710  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:36.106750  123454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:36.106774  123454 buildroot.go:174] setting up certificates
	I0316 00:17:36.106786  123454 provision.go:84] configureAuth start
	I0316 00:17:36.106800  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:36.107104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.110050  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110431  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.110476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110592  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.113019  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113366  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.113391  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113517  123454 provision.go:143] copyHostCerts
	I0316 00:17:36.113595  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:36.113619  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:36.113699  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:36.113898  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:36.113911  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:36.113964  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:36.114051  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:36.114063  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:36.114089  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:36.114155  123454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.no-preload-238598 san=[127.0.0.1 192.168.50.137 localhost minikube no-preload-238598]
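provision.go then issues a per-machine server certificate whose subject alternative names cover the loopback address, the machine IP, and the host names listed above, signed with the ca.pem / ca-key.pem pair. A compressed crypto/x509 sketch of producing such a SAN certificate, self-signed here purely for brevity (the real flow signs with the CA key):

	// Hedged sketch: create a server certificate with the SANs from the log
	// (127.0.0.1, 192.168.50.137, localhost, minikube, no-preload-238598).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-238598"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // the CertExpiration value seen in these profile configs
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "no-preload-238598"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.137")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
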
	I0316 00:17:36.239622  123454 provision.go:177] copyRemoteCerts
	I0316 00:17:36.239706  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:36.239736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.242440  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.242806  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.242841  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.243086  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.243279  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.243482  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.243623  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.330601  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:36.359600  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:17:36.384258  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:36.409195  123454 provision.go:87] duration metric: took 302.39571ms to configureAuth
	I0316 00:17:36.409239  123454 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:36.409440  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:17:36.409539  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.412280  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412618  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.412652  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.413039  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413217  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413366  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.413576  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.413803  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.413823  123454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:36.703300  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:36.703365  123454 machine.go:97] duration metric: took 986.82471ms to provisionDockerMachine
	I0316 00:17:36.703418  123454 start.go:293] postStartSetup for "no-preload-238598" (driver="kvm2")
	I0316 00:17:36.703440  123454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:36.703474  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.703838  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:36.703880  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.706655  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707019  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.707057  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707237  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.707470  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.707626  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.707822  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.794605  123454 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:36.799121  123454 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:36.799151  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:36.799222  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:36.799298  123454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:36.799423  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:36.808805  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:36.834244  123454 start.go:296] duration metric: took 130.803052ms for postStartSetup
	I0316 00:17:36.834290  123454 fix.go:56] duration metric: took 23.629390369s for fixHost
	I0316 00:17:36.834318  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.837197  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837643  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.837684  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837926  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.838155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838360  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838533  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.838721  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.838965  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.838982  123454 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:17:36.956309  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548256.900043121
	
	I0316 00:17:36.956352  123454 fix.go:216] guest clock: 1710548256.900043121
	I0316 00:17:36.956366  123454 fix.go:229] Guest: 2024-03-16 00:17:36.900043121 +0000 UTC Remote: 2024-03-16 00:17:36.83429667 +0000 UTC m=+356.318603082 (delta=65.746451ms)
	I0316 00:17:36.956398  123454 fix.go:200] guest clock delta is within tolerance: 65.746451ms
	I0316 00:17:36.956425  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 23.751563248s
	I0316 00:17:36.956472  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.956736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.960077  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960494  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.960524  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960678  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961247  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961454  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961522  123454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:36.961588  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.961730  123454 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:36.961756  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.964457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964801  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.964834  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964905  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965346  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965374  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.965406  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965518  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.965609  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965681  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.965739  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965866  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.966034  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:37.077559  123454 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:37.084485  123454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:37.229503  123454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:37.236783  123454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:37.236862  123454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:37.255248  123454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:37.255275  123454 start.go:494] detecting cgroup driver to use...
	I0316 00:17:37.255377  123454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:37.272795  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:37.289822  123454 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:37.289885  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:37.306082  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:37.322766  123454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:37.448135  123454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:37.614316  123454 docker.go:233] disabling docker service ...
	I0316 00:17:37.614381  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:37.630091  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:37.645025  123454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:37.773009  123454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:37.891459  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:37.906829  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:37.927910  123454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:17:37.927982  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.939166  123454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:37.939226  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.950487  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.961547  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.972402  123454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:37.983413  123454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:37.993080  123454 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:37.993147  123454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:38.007746  123454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:38.017917  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:38.158718  123454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:38.329423  123454 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:38.329520  123454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:38.334518  123454 start.go:562] Will wait 60s for crictl version
	I0316 00:17:38.334570  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.338570  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:38.375688  123454 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:38.375779  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.408167  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.444754  123454 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.277480  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.281375  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.446078  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:38.448885  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449299  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:38.449329  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449565  123454 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:38.453922  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:38.467515  123454 kubeadm.go:877] updating cluster {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:38.467646  123454 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:17:38.467690  123454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:38.511057  123454 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:17:38.511093  123454 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:38.511189  123454 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.511221  123454 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0316 00:17:38.511240  123454 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.511253  123454 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.511305  123454 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.511335  123454 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.511338  123454 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.511188  123454 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.512934  123454 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.512949  123454 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.512953  123454 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0316 00:17:38.513014  123454 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.648129  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.650306  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.661334  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0316 00:17:38.666656  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.669280  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.684494  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.690813  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.760339  123454 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0316 00:17:38.760396  123454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.760449  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.760545  123454 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0316 00:17:38.760585  123454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.760641  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908463  123454 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0316 00:17:38.908491  123454 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0316 00:17:38.908515  123454 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.908525  123454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908579  123454 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0316 00:17:38.908607  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.908615  123454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.908585  123454 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908638  123454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.908739  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.954587  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.954611  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.954699  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.961857  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.961878  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0316 00:17:38.961979  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:38.962005  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.962010  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:39.052859  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.052888  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0316 00:17:39.052907  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.052958  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.052976  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.053001  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0316 00:17:39.052963  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.053055  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.053060  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0316 00:17:39.053100  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:39.053156  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.053235  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.120914  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.612614  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.779012  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:43.278631  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:41.133735  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.080597621s)
	I0316 00:17:41.133778  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0316 00:17:41.133890  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.080807025s)
	I0316 00:17:41.133924  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0316 00:17:41.133942  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.08085981s)
	I0316 00:17:41.133972  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133978  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.080988823s)
	I0316 00:17:41.133993  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133948  123454 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134011  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.080758975s)
	I0316 00:17:41.134031  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0316 00:17:41.134032  123454 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.01309054s)
	I0316 00:17:41.134060  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134083  123454 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0316 00:17:41.134110  123454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:41.134160  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:43.198894  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.064808781s)
	I0316 00:17:43.198926  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0316 00:17:43.198952  123454 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.198951  123454 ssh_runner.go:235] Completed: which crictl: (2.064761171s)
	I0316 00:17:43.199004  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.199051  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:43.112939  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.114446  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.613592  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.776235  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.777686  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.278307  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.110501  123454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.911421102s)
	I0316 00:17:47.110567  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0316 00:17:47.110695  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.911660704s)
	I0316 00:17:47.110728  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0316 00:17:47.110751  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:47.110703  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:47.110802  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:49.585079  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.474253503s)
	I0316 00:17:49.585109  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0316 00:17:49.585130  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.474308112s)
	I0316 00:17:49.585160  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0316 00:17:49.585134  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.585220  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.613704  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.615227  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:54.780467  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.736360  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.151102687s)
	I0316 00:17:51.736402  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0316 00:17:51.736463  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:51.736535  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:54.214591  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477993231s)
	I0316 00:17:54.214629  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0316 00:17:54.214658  123454 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:54.214728  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:55.171123  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0316 00:17:55.171204  123454 cache_images.go:123] Successfully loaded all cached images
	I0316 00:17:55.171213  123454 cache_images.go:92] duration metric: took 16.660103091s to LoadCachedImages
	I0316 00:17:55.171233  123454 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:17:55.171506  123454 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-238598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:55.171617  123454 ssh_runner.go:195] Run: crio config
	I0316 00:17:55.225056  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:17:55.225078  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:55.225089  123454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:55.225110  123454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-238598 NodeName:no-preload-238598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:17:55.225278  123454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-238598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:55.225371  123454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:17:55.237834  123454 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:55.237896  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:55.248733  123454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 00:17:55.266587  123454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:17:55.285283  123454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0316 00:17:55.303384  123454 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:55.307384  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:55.321079  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:55.453112  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:55.470573  123454 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598 for IP: 192.168.50.137
	I0316 00:17:55.470600  123454 certs.go:194] generating shared ca certs ...
	I0316 00:17:55.470623  123454 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:55.470808  123454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:55.470868  123454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:55.470906  123454 certs.go:256] generating profile certs ...
	I0316 00:17:55.471028  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.key
	I0316 00:17:55.471140  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key.0f2ae39d
	I0316 00:17:55.471195  123454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key
	I0316 00:17:55.471410  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:55.471463  123454 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:55.471483  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:55.471515  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:55.471542  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:55.471568  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:55.471612  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:55.472267  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:55.517524  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:54.115678  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:56.613196  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.277553  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:59.277770  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.567992  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:55.601463  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:55.637956  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:17:55.670063  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:55.694990  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:55.718916  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:17:55.744124  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:55.770051  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:55.794846  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:55.819060  123454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:55.836991  123454 ssh_runner.go:195] Run: openssl version
	I0316 00:17:55.844665  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:55.857643  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862493  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862561  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.868430  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:55.880551  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:55.891953  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896627  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896687  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.902539  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:55.915215  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:55.926699  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931120  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931172  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.936791  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:55.948180  123454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:55.953021  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:55.959107  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:55.965018  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:55.971159  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:55.977069  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:55.983062  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:55.989119  123454 kubeadm.go:391] StartCluster: {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:55.989201  123454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:55.989254  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.029128  123454 cri.go:89] found id: ""
	I0316 00:17:56.029209  123454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:56.040502  123454 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:56.040525  123454 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:56.040531  123454 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:56.040577  123454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:56.051843  123454 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:56.052995  123454 kubeconfig.go:125] found "no-preload-238598" server: "https://192.168.50.137:8443"
	I0316 00:17:56.055273  123454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:56.066493  123454 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0316 00:17:56.066547  123454 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:56.066564  123454 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:56.066641  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.111015  123454 cri.go:89] found id: ""
	I0316 00:17:56.111110  123454 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:56.131392  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:56.142638  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:56.142665  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:56.142725  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:56.154318  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:56.154418  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:56.166011  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:56.176688  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:56.176752  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:56.187776  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.198216  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:56.198285  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.208661  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:56.218587  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:56.218655  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:56.230247  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:56.241302  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:56.361423  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.731067  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.369591288s)
	I0316 00:17:57.731101  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.952457  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.044540  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.179796  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:58.179894  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.680635  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.180617  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.205383  123454 api_server.go:72] duration metric: took 1.025590775s to wait for apiserver process to appear ...
	I0316 00:17:59.205411  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:59.205436  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:59.205935  123454 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0316 00:17:59.706543  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:58.613340  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:00.618869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:01.914835  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.914865  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:01.914879  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:01.972138  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.972173  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:02.206540  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.219111  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.219165  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:02.705639  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.709820  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.709850  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:03.206513  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:03.216320  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:18:03.224237  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:18:03.224263  123454 api_server.go:131] duration metric: took 4.018845389s to wait for apiserver health ...
	I0316 00:18:03.224272  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:18:03.224279  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:18:03.225951  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.777309  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.777625  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.227382  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:18:03.245892  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:18:03.267423  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:18:03.281349  123454 system_pods.go:59] 8 kube-system pods found
	I0316 00:18:03.281387  123454 system_pods.go:61] "coredns-76f75df574-d2f6z" [3cd22981-0f83-4a60-9930-c103cfc2d2ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:18:03.281397  123454 system_pods.go:61] "etcd-no-preload-238598" [d98fa5b6-ad24-4c90-98c8-9e5b8f1a3250] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:18:03.281408  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [e7d7a5a0-9a4f-4df2-aaf7-44c36e5bd313] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:18:03.281420  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [a198865e-0ed5-40b6-8b10-a4fccdefa059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:18:03.281434  123454 system_pods.go:61] "kube-proxy-cjhzn" [6529873c-cb9d-42d8-991d-e450783b1707] Running
	I0316 00:18:03.281443  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [bfb373fb-ec78-4ef1-b92e-3a8af3f805a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:18:03.281457  123454 system_pods.go:61] "metrics-server-57f55c9bc5-hffvp" [4181fe7f-3e95-455b-a744-8f4dca7b870d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:18:03.281466  123454 system_pods.go:61] "storage-provisioner" [d568ae10-7b9c-4c98-8263-a09505227ac7] Running
	I0316 00:18:03.281485  123454 system_pods.go:74] duration metric: took 14.043103ms to wait for pod list to return data ...
	I0316 00:18:03.281501  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:18:03.284899  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:18:03.284923  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:18:03.284934  123454 node_conditions.go:105] duration metric: took 3.425812ms to run NodePressure ...
	I0316 00:18:03.284955  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:18:03.562930  123454 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568376  123454 kubeadm.go:733] kubelet initialised
	I0316 00:18:03.568402  123454 kubeadm.go:734] duration metric: took 5.44437ms waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568412  123454 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:18:03.574420  123454 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:03.113622  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.613724  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:07.614087  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.278238  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.776236  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.582284  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.081679  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.082343  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.113282  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.114515  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.776835  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.777258  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.778115  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.582099  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:13.082243  123454 pod_ready.go:92] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:13.082263  123454 pod_ready.go:81] duration metric: took 9.507817974s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:13.082271  123454 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:15.088733  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.613599  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:16.614876  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.280289  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.777434  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:17.089800  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.092413  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.092441  123454 pod_ready.go:81] duration metric: took 6.010161958s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.092453  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.097972  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.097996  123454 pod_ready.go:81] duration metric: took 5.533097ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.098008  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102186  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.102204  123454 pod_ready.go:81] duration metric: took 4.187939ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102213  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106692  123454 pod_ready.go:92] pod "kube-proxy-cjhzn" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.106712  123454 pod_ready.go:81] duration metric: took 4.492665ms for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106720  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111735  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.111754  123454 pod_ready.go:81] duration metric: took 5.027601ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111764  123454 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.113278  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.114061  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:22.276633  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:24.278807  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.119790  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.618664  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.115414  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.613572  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:26.778891  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:29.277585  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.619282  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.118484  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.121236  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.114043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.119153  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.614043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
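	The block above is one pass of minikube's log-gathering loop: with no kube-apiserver container running, it polls each expected control-plane container via crictl and then tails the kubelet, dmesg, CRI-O, and container-status output before retrying. The same checks can be reproduced by hand on the node (for example via `minikube ssh`); a minimal sketch using only the commands visible in the log, with the kubectl path and kubeconfig location taken from this run:

	# One pass of the diagnostic loop, run manually inside the node
	sudo crictl ps -a --quiet --name=kube-apiserver   # returns nothing here: the container never started
	sudo crictl ps -a --quiet --name=etcd             # same check repeats for etcd, coredns, kube-scheduler, ...
	sudo journalctl -u kubelet -n 400                 # kubelet log tail
	sudo journalctl -u crio -n 400                    # CRI-O log tail
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	# the last command keeps failing with "connection to the server localhost:8443 was refused" until the apiserver comes up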
	I0316 00:18:31.778203  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.276424  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.618082  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.619339  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.614209  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.113521  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:36.279218  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:38.779161  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.118552  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.619543  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.614042  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.113784  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:41.278664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:43.777450  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.119118  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.119473  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.614102  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:47.112496  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:46.277664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.279095  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:46.619201  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.619302  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:49.113616  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.613449  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:18:50.777409  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:52.779497  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.278072  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.119041  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:53.121052  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:54.113699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:56.613686  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
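	Every cycle ends the same way because nothing is answering on the apiserver port. A quick way to confirm that directly on the node, between polls, is a port check; a minimal sketch (assumes curl and ss are available in the node image, which may not hold for every minikube ISO):

	sudo ss -ltn 'sport = :8443'                              # no listener while the apiserver is down
	curl -sk https://localhost:8443/healthz || echo refused   # mirrors the "connection refused" seen above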
	I0316 00:18:57.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.277696  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.618835  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.118984  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.119379  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.614207  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:01.113795  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
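	The repeated cycle above is minikube's log-gathering fallback for a v1.20.0 control plane that never came up: with no kube-apiserver process found by pgrep, it probes each component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) via crictl and then collects kubelet, dmesg, describe-nodes, CRI-O, and container-status output. A minimal sketch of the same checks run by hand on the node (assuming shell access to it; the commands are the ones the log itself runs):

	  # Is an apiserver process running at all?
	  sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	  # Ask the CRI runtime whether the control-plane containers exist in any state
	  for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager; do
	    echo "== $c =="
	    sudo crictl ps -a --quiet --name="$c"
	  done

	  # Node-level logs minikube gathers in the same situation
	  sudo journalctl -u kubelet -n 400
	  sudo journalctl -u crio -n 400
	  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	  sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig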
	I0316 00:19:02.281155  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.779663  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:02.618637  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.619492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:03.613777  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.114458  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:07.276601  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.277239  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.619784  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.118699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:08.613361  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.615062  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:11.277319  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.777280  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:11.119614  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.618997  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.113490  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:15.613530  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:17.613578  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.276204  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.277156  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.118717  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.618005  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:19.614161  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.112808  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
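	The interleaved pod_ready lines above come from the other three test clusters polling their metrics-server pods, which never report Ready. A small sketch of the equivalent manual check (the k8s-app=metrics-server label selector is an assumption; the pod name is one seen in the log):

	  # List the metrics-server pod(s) in kube-system
	  kubectl -n kube-system get pods -l k8s-app=metrics-server
	  # Read the Ready condition of a specific pod from the log
	  kubectl -n kube-system get pod metrics-server-57f55c9bc5-cm878 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'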
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:20.777843  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.778609  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.780571  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:20.618505  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.619290  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.118778  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.113901  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:26.115541  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:27.277159  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:29.277242  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:27.618996  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:30.118650  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:28.614101  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.114366  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:31.776661  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.778372  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:32.125130  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:34.619153  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.114785  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.116692  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:37.613605  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:36.276574  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.276784  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:36.619780  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.619966  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:39.614178  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.616246  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:40.779366  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.277656  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.279201  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.118560  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.120706  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:44.113022  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:46.114296  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:47.778494  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.277998  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.619070  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.622001  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.118739  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:48.114952  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.614794  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:52.776113  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.777687  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:52.119145  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.619675  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:53.113139  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:55.113961  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.613751  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.277412  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.277555  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.119685  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.618622  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.614914  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:02.113286  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:01.777542  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.278277  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:01.618756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.119973  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.113918  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.613434  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
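The cycle above repeats throughout the rest of this log: the kube-apiserver never comes up, so every CRI listing returns zero containers and each "kubectl describe nodes" attempt fails with "The connection to the server localhost:8443 was refused", after which minikube falls back to collecting kubelet, dmesg, CRI-O and container-status output. A minimal sketch of how the same state could be confirmed by hand, assuming a hypothetical profile name old-k8s-version-000000 (the real profile name is not shown in this excerpt) and that curl is available in the guest:

	minikube ssh -p old-k8s-version-000000                # open a shell on the node; profile name is a placeholder
	sudo crictl ps -a --quiet --name=kube-apiserver       # same check the log runs; an empty result means no apiserver container
	curl -ksS https://localhost:8443/healthz || true      # expect "connection refused" while the apiserver is down
	sudo journalctl -u kubelet -n 100 --no-pager           # kubelet messages around why the control-plane static pods are not starting
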
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:06.278976  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.778022  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.124642  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.618968  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.613517  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.613699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.613997  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:11.277492  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.777429  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.619721  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.120185  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.114540  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:17.614281  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:15.781621  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.277078  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.277734  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.620224  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.118862  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.118920  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.117088  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.614917  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:22.779251  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.276842  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.118990  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:24.119699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.114563  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.614869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.277136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.777082  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:26.619354  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:28.619489  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.619807  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:32.117311  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:32.277582  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.778394  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.622010  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:33.119518  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:35.119736  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.613788  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.277007  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.776793  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:37.121196  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.619239  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:38.616664  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:41.112900  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:41.777952  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.276802  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:42.119128  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.119255  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:43.114941  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:45.614095  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:47.616615  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.277300  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.777275  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.119389  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.618309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.116327  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.614990  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:50.777563  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.778761  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.276863  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.619469  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:53.119593  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.116184  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:57.613355  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:57.776955  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.276381  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.619683  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:58.122772  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:59.616518  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.115379  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.613248  123537 pod_ready.go:81] duration metric: took 4m0.006848891s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:02.613273  123537 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:02.613280  123537 pod_ready.go:38] duration metric: took 4m5.267062496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:02.613297  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:02.613347  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:02.613393  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:02.670107  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:02.670139  123537 cri.go:89] found id: ""
	I0316 00:21:02.670149  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:02.670210  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.675144  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:02.675212  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:02.720695  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:02.720720  123537 cri.go:89] found id: ""
	I0316 00:21:02.720729  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:02.720790  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.725490  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:02.725570  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.276825  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.779811  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.617765  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.619210  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.619603  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.778908  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:02.778959  123537 cri.go:89] found id: ""
	I0316 00:21:02.778971  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:02.779028  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.784772  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:02.784864  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:02.830682  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:02.830709  123537 cri.go:89] found id: ""
	I0316 00:21:02.830719  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:02.830784  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.835733  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:02.835813  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:02.875862  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:02.875890  123537 cri.go:89] found id: ""
	I0316 00:21:02.875902  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:02.875967  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.880801  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:02.880857  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:02.921585  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:02.921611  123537 cri.go:89] found id: ""
	I0316 00:21:02.921622  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:02.921689  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.929521  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:02.929593  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.977621  123537 cri.go:89] found id: ""
	I0316 00:21:02.977646  123537 logs.go:276] 0 containers: []
	W0316 00:21:02.977657  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.977668  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:02.977723  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:03.020159  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.020186  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.020193  123537 cri.go:89] found id: ""
	I0316 00:21:03.020204  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:03.020274  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.025593  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.030718  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:03.030744  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:03.090141  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:03.090182  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:03.147416  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:03.147466  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:03.189686  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:03.189733  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:03.245980  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:03.246020  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.296494  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:03.296534  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:03.349602  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:03.349635  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:03.364783  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:03.364819  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:03.513917  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:03.513955  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:03.567916  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:03.567952  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:03.607620  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:03.607658  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:03.658683  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:03.658717  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.699797  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:03.699827  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:06.715440  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:06.733725  123537 api_server.go:72] duration metric: took 4m16.598062692s to wait for apiserver process to appear ...
	I0316 00:21:06.733759  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:06.733810  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:06.733868  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:06.775396  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:06.775431  123537 cri.go:89] found id: ""
	I0316 00:21:06.775442  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:06.775506  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.780448  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:06.780503  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:06.836927  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:06.836962  123537 cri.go:89] found id: ""
	I0316 00:21:06.836972  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:06.837025  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.841803  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:06.841869  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:06.887445  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:06.887470  123537 cri.go:89] found id: ""
	I0316 00:21:06.887479  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:06.887534  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.892112  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:06.892192  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:06.936614  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:06.936642  123537 cri.go:89] found id: ""
	I0316 00:21:06.936653  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:06.936717  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.943731  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:06.943799  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:06.986738  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:06.986764  123537 cri.go:89] found id: ""
	I0316 00:21:06.986774  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:06.986843  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.991555  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:06.991621  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:07.052047  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:07.052074  123537 cri.go:89] found id: ""
	I0316 00:21:07.052082  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:07.052133  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.057297  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:07.057358  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:07.104002  123537 cri.go:89] found id: ""
	I0316 00:21:07.104034  123537 logs.go:276] 0 containers: []
	W0316 00:21:07.104042  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:07.104049  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:07.104113  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:07.148540  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:07.148562  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:07.148566  123537 cri.go:89] found id: ""
	I0316 00:21:07.148572  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:07.148620  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.153502  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.157741  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:07.157770  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:07.197856  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:07.197889  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:07.654282  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:07.654324  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:07.708539  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:07.708579  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:07.725072  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:07.725104  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.277657  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.780721  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.121773  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.619756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.862465  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:07.862498  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:07.925812  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:07.925846  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:07.986121  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:07.986152  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:08.036774  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:08.036817  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:08.091902  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:08.091933  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:08.142096  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:08.142128  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:08.210747  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:08.210789  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:08.270225  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:08.270259  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:10.817112  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:21:10.822359  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:21:10.823955  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:10.823978  123537 api_server.go:131] duration metric: took 4.090210216s to wait for apiserver health ...
	I0316 00:21:10.823988  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:10.824019  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:10.824076  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:10.872487  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:10.872514  123537 cri.go:89] found id: ""
	I0316 00:21:10.872524  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:10.872590  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.877131  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:10.877197  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:10.916699  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:10.916728  123537 cri.go:89] found id: ""
	I0316 00:21:10.916737  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:10.916797  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.921114  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:10.921182  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:10.964099  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:10.964123  123537 cri.go:89] found id: ""
	I0316 00:21:10.964132  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:10.964191  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.968716  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:10.968788  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.008883  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.008909  123537 cri.go:89] found id: ""
	I0316 00:21:11.008919  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:11.008974  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.014068  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.014138  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.067209  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.067239  123537 cri.go:89] found id: ""
	I0316 00:21:11.067251  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:11.067315  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.072536  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.072663  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.119366  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.119399  123537 cri.go:89] found id: ""
	I0316 00:21:11.119411  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:11.119462  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.124502  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.124590  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.169458  123537 cri.go:89] found id: ""
	I0316 00:21:11.169494  123537 logs.go:276] 0 containers: []
	W0316 00:21:11.169505  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.169513  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:11.169576  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:11.218886  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:11.218923  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:11.218928  123537 cri.go:89] found id: ""
	I0316 00:21:11.218938  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:11.219002  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.223583  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.228729  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:11.228753  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:11.282781  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:11.282818  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:11.347330  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:11.347379  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.401191  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:11.401225  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.453126  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:11.453158  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.523058  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.523110  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.944108  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.944157  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:12.001558  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:12.001602  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:12.062833  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:12.062885  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:12.078726  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:12.078762  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:12.209248  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:12.209284  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:12.251891  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:12.251930  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:12.296240  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:12.296271  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:14.846244  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:14.846274  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.846279  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.846283  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.846287  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.846290  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.846294  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.846299  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.846302  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.846309  123537 system_pods.go:74] duration metric: took 4.022315588s to wait for pod list to return data ...
	I0316 00:21:14.846317  123537 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:14.848830  123537 default_sa.go:45] found service account: "default"
	I0316 00:21:14.848852  123537 default_sa.go:55] duration metric: took 2.529805ms for default service account to be created ...
	I0316 00:21:14.848859  123537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:14.861369  123537 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:14.861396  123537 system_pods.go:89] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.861401  123537 system_pods.go:89] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.861405  123537 system_pods.go:89] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.861409  123537 system_pods.go:89] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.861448  123537 system_pods.go:89] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.861456  123537 system_pods.go:89] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.861465  123537 system_pods.go:89] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.861470  123537 system_pods.go:89] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.861478  123537 system_pods.go:126] duration metric: took 12.614437ms to wait for k8s-apps to be running ...
	I0316 00:21:14.861488  123537 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:14.861534  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:14.879439  123537 system_svc.go:56] duration metric: took 17.934537ms WaitForService to wait for kubelet
	I0316 00:21:14.879484  123537 kubeadm.go:576] duration metric: took 4m24.743827748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:14.879523  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:14.882642  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:14.882673  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:14.882716  123537 node_conditions.go:105] duration metric: took 3.184841ms to run NodePressure ...
	I0316 00:21:14.882733  123537 start.go:240] waiting for startup goroutines ...
	I0316 00:21:14.882749  123537 start.go:245] waiting for cluster config update ...
	I0316 00:21:14.882789  123537 start.go:254] writing updated cluster config ...
	I0316 00:21:14.883119  123537 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:14.937804  123537 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:14.939886  123537 out.go:177] * Done! kubectl is now configured to use "embed-certs-666637" cluster and "default" namespace by default
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:12.278383  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.279769  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:12.124356  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.619164  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:16.777597  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.277188  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.119492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.119935  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.278136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:22.779721  123819 pod_ready.go:81] duration metric: took 4m0.010022344s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:22.779752  123819 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:22.779762  123819 pod_ready.go:38] duration metric: took 4m5.913207723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:22.779779  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:22.779814  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:22.779876  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:22.836022  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:22.836058  123819 cri.go:89] found id: ""
	I0316 00:21:22.836069  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:22.836131  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.841289  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:22.841362  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:22.883980  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:22.884007  123819 cri.go:89] found id: ""
	I0316 00:21:22.884018  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:22.884084  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.889352  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:22.889427  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:22.929947  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:22.929977  123819 cri.go:89] found id: ""
	I0316 00:21:22.929987  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:22.930033  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.935400  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:22.935485  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:22.975548  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:22.975580  123819 cri.go:89] found id: ""
	I0316 00:21:22.975598  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:22.975671  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.981916  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:22.981998  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.019925  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.019965  123819 cri.go:89] found id: ""
	I0316 00:21:23.019977  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:23.020046  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.024870  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.024960  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.068210  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.068241  123819 cri.go:89] found id: ""
	I0316 00:21:23.068253  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:23.068344  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.073492  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.073578  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.113267  123819 cri.go:89] found id: ""
	I0316 00:21:23.113301  123819 logs.go:276] 0 containers: []
	W0316 00:21:23.113311  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.113319  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:23.113382  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:23.160155  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:23.160175  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.160179  123819 cri.go:89] found id: ""
	I0316 00:21:23.160192  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:23.160241  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.165125  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.169508  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:23.169530  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.218749  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:23.218786  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.274140  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:23.274177  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.320515  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:23.320559  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:23.835119  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:23.835173  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:23.907635  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.907691  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.925071  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:23.925126  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:23.991996  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:23.992028  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:24.032865  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.032899  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.090947  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:24.090987  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:24.285862  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:24.285896  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:24.337983  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:24.338027  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:24.379626  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:24.379657  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:21.618894  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:24.122648  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:26.918844  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.938014  123819 api_server.go:72] duration metric: took 4m17.276244s to wait for apiserver process to appear ...
	I0316 00:21:26.938053  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:26.938095  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:26.938157  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:26.983515  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:26.983538  123819 cri.go:89] found id: ""
	I0316 00:21:26.983546  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:26.983595  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:26.989278  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:26.989341  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:27.039968  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.040000  123819 cri.go:89] found id: ""
	I0316 00:21:27.040009  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:27.040078  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.045617  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:27.045687  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:27.085920  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.085948  123819 cri.go:89] found id: ""
	I0316 00:21:27.085960  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:27.086029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.090911  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:27.090989  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:27.137289  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:27.137322  123819 cri.go:89] found id: ""
	I0316 00:21:27.137333  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:27.137393  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.141956  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:27.142031  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:27.180823  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.180845  123819 cri.go:89] found id: ""
	I0316 00:21:27.180854  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:27.180919  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.185439  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:27.185523  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:27.225775  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:27.225797  123819 cri.go:89] found id: ""
	I0316 00:21:27.225805  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:27.225854  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.230648  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:27.230717  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:27.269429  123819 cri.go:89] found id: ""
	I0316 00:21:27.269465  123819 logs.go:276] 0 containers: []
	W0316 00:21:27.269477  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:27.269485  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:27.269550  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:27.308288  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.308316  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.308321  123819 cri.go:89] found id: ""
	I0316 00:21:27.308329  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:27.308378  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.312944  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.317794  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:27.317829  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:27.364287  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:27.364323  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.419482  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:27.419521  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.468553  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:27.468585  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.513287  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:27.513320  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.561382  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:27.561426  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.601292  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:27.601325  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:27.656848  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:27.656902  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:27.796212  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:27.796245  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:28.246569  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:28.246611  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:28.302971  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:28.303015  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:28.359613  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:28.359645  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:28.375844  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:28.375877  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:26.124217  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:28.619599  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:30.921320  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:21:30.926064  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:21:30.927332  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:30.927353  123819 api_server.go:131] duration metric: took 3.989292523s to wait for apiserver health ...
	I0316 00:21:30.927361  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:30.927386  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:30.927438  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:30.975348  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:30.975376  123819 cri.go:89] found id: ""
	I0316 00:21:30.975389  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:30.975459  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:30.980128  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:30.980194  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:31.029534  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.029563  123819 cri.go:89] found id: ""
	I0316 00:21:31.029574  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:31.029627  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.034066  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:31.034149  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:31.073857  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.073884  123819 cri.go:89] found id: ""
	I0316 00:21:31.073892  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:31.073961  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.078421  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:31.078501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:31.117922  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.117951  123819 cri.go:89] found id: ""
	I0316 00:21:31.117964  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:31.118029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.122435  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:31.122501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:31.161059  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.161089  123819 cri.go:89] found id: ""
	I0316 00:21:31.161101  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:31.161155  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.165503  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:31.165572  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:31.207637  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.207669  123819 cri.go:89] found id: ""
	I0316 00:21:31.207679  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:31.207742  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.212296  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:31.212360  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:31.251480  123819 cri.go:89] found id: ""
	I0316 00:21:31.251519  123819 logs.go:276] 0 containers: []
	W0316 00:21:31.251530  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:31.251539  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:31.251608  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:31.296321  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.296345  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.296350  123819 cri.go:89] found id: ""
	I0316 00:21:31.296357  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:31.296414  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.302159  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.306501  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:31.306526  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.348347  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:31.348379  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.388542  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:31.388573  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:31.439926  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:31.439962  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:31.499674  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:31.499711  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:31.552720  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:31.552771  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.605281  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:31.605331  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.651964  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:31.651997  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.696113  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:31.696150  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.749712  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:31.749751  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.801476  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:31.801508  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:32.236105  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:32.236146  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:32.253815  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:32.253848  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:34.930730  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:34.930759  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.930763  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.930767  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.930772  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.930775  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.930778  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.930783  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.930788  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.930798  123819 system_pods.go:74] duration metric: took 4.003426137s to wait for pod list to return data ...
	I0316 00:21:34.930807  123819 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:34.933462  123819 default_sa.go:45] found service account: "default"
	I0316 00:21:34.933492  123819 default_sa.go:55] duration metric: took 2.674728ms for default service account to be created ...
	I0316 00:21:34.933500  123819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:34.939351  123819 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:34.939382  123819 system_pods.go:89] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.939393  123819 system_pods.go:89] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.939400  123819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.939406  123819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.939414  123819 system_pods.go:89] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.939420  123819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.939442  123819 system_pods.go:89] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.939454  123819 system_pods.go:89] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.939469  123819 system_pods.go:126] duration metric: took 5.962328ms to wait for k8s-apps to be running ...
	I0316 00:21:34.939482  123819 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:34.939539  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:34.958068  123819 system_svc.go:56] duration metric: took 18.572929ms WaitForService to wait for kubelet
	I0316 00:21:34.958108  123819 kubeadm.go:576] duration metric: took 4m25.296341727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:34.958130  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:34.962603  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:34.962629  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:34.962641  123819 node_conditions.go:105] duration metric: took 4.505615ms to run NodePressure ...
	I0316 00:21:34.962657  123819 start.go:240] waiting for startup goroutines ...
	I0316 00:21:34.962667  123819 start.go:245] waiting for cluster config update ...
	I0316 00:21:34.962690  123819 start.go:254] writing updated cluster config ...
	I0316 00:21:34.963009  123819 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:35.015774  123819 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:35.019103  123819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-313436" cluster and "default" namespace by default
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:21:31.121456  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:33.122437  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:35.618906  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:37.619223  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:40.120743  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:42.619309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:44.619544  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:47.120179  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:49.619419  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:52.124510  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:54.125147  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:56.621651  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:59.120895  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:01.618287  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:03.620297  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:06.119870  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:08.122618  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:10.619464  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.121381  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:15.619590  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.122483  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:19.112568  123454 pod_ready.go:81] duration metric: took 4m0.000767313s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	E0316 00:22:19.112600  123454 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0316 00:22:19.112621  123454 pod_ready.go:38] duration metric: took 4m15.544198169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:22:19.112652  123454 kubeadm.go:591] duration metric: took 4m23.072115667s to restartPrimaryControlPlane
	W0316 00:22:19.112713  123454 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:22:19.112769  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:51.249327  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.136527598s)
	I0316 00:22:51.249406  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:22:51.268404  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:22:51.280832  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:22:51.292639  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:22:51.292661  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:22:51.292712  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:22:51.303272  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:22:51.303347  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:22:51.313854  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:22:51.324290  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:22:51.324361  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:22:51.334879  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.345302  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:22:51.345382  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.355682  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:22:51.366601  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:22:51.366660  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:22:51.377336  123454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:22:51.594624  123454 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:00.473055  123454 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0316 00:23:00.473140  123454 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:00.473255  123454 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:00.473415  123454 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:00.473551  123454 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:00.473682  123454 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:00.475591  123454 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:00.475704  123454 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:00.475803  123454 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:00.475905  123454 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:00.476001  123454 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:00.476100  123454 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:00.476190  123454 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:00.476281  123454 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:00.476378  123454 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:00.476516  123454 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:00.476647  123454 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:00.476715  123454 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:00.476801  123454 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:00.476879  123454 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:00.476968  123454 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0316 00:23:00.477042  123454 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:00.477166  123454 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:00.477253  123454 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:00.477378  123454 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:00.477480  123454 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:00.479084  123454 out.go:204]   - Booting up control plane ...
	I0316 00:23:00.479206  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:00.479332  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:00.479440  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:00.479541  123454 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:00.479625  123454 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:00.479697  123454 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:00.479874  123454 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:23:00.479994  123454 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003092 seconds
	I0316 00:23:00.480139  123454 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:23:00.480339  123454 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:23:00.480445  123454 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:23:00.480687  123454 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-238598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:23:00.480789  123454 kubeadm.go:309] [bootstrap-token] Using token: aspuu8.i4yhgkjx7e43mgmn
	I0316 00:23:00.482437  123454 out.go:204]   - Configuring RBAC rules ...
	I0316 00:23:00.482568  123454 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:23:00.482697  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:23:00.482917  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:23:00.483119  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:23:00.483283  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:23:00.483406  123454 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:23:00.483582  123454 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:23:00.483653  123454 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:23:00.483714  123454 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:23:00.483720  123454 kubeadm.go:309] 
	I0316 00:23:00.483815  123454 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:23:00.483833  123454 kubeadm.go:309] 
	I0316 00:23:00.483973  123454 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:23:00.483986  123454 kubeadm.go:309] 
	I0316 00:23:00.484014  123454 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:23:00.484119  123454 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:23:00.484200  123454 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:23:00.484211  123454 kubeadm.go:309] 
	I0316 00:23:00.484283  123454 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:23:00.484288  123454 kubeadm.go:309] 
	I0316 00:23:00.484360  123454 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:23:00.484366  123454 kubeadm.go:309] 
	I0316 00:23:00.484452  123454 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:23:00.484560  123454 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:23:00.484657  123454 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:23:00.484666  123454 kubeadm.go:309] 
	I0316 00:23:00.484798  123454 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:23:00.484920  123454 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:23:00.484932  123454 kubeadm.go:309] 
	I0316 00:23:00.485053  123454 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485196  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:23:00.485227  123454 kubeadm.go:309] 	--control-plane 
	I0316 00:23:00.485241  123454 kubeadm.go:309] 
	I0316 00:23:00.485357  123454 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:23:00.485367  123454 kubeadm.go:309] 
	I0316 00:23:00.485488  123454 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485646  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
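The join commands printed above embed a bootstrap token and a CA certificate hash generated during this init. As a general kubeadm note (not something this test performs): bootstrap tokens expire, and a fresh join command can be regenerated on the control plane; the hash can be recomputed from the cluster CA, which this cluster keeps under /var/lib/minikube/certs per the [certs] lines above. A hedged sketch using the standard kubeadm/openssl recipe:

    # Regenerate a join command if the printed token has expired.
    sudo kubeadm token create --print-join-command
    # Recompute the discovery CA cert hash (assumes an RSA CA key, the kubeadm default).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'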
	I0316 00:23:00.485661  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:23:00.485671  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:23:00.487417  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:23:00.489063  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:23:00.526147  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
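The scp line above writes a 457-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist. The literal file contents are not in the log; the following is only a representative bridge + host-local conflist (subnet and flags are assumptions, not minikube's exact template), shown as the kind of file that line produces:

    # Illustrative bridge CNI config; values are assumptions, not the literal 457-byte file.
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF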
	I0316 00:23:00.571796  123454 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-238598 minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=no-preload-238598 minikube.k8s.io/primary=true
	I0316 00:23:00.892908  123454 ops.go:34] apiserver oom_adj: -16
	I0316 00:23:00.892994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.394077  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.893097  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.393114  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.893994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.393930  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.893428  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.393822  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.893810  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.393999  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.893998  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.393104  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.893725  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.393873  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.893432  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.394054  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.893595  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.393109  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.893621  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.393322  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.894024  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.393711  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.893465  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.393059  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.497890  123454 kubeadm.go:1107] duration metric: took 11.926069028s to wait for elevateKubeSystemPrivileges
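The repeated "kubectl get sa default" runs above are a ~500ms poll: elevateKubeSystemPrivileges waits for the default ServiceAccount to exist so the minikube-rbac cluster-admin binding created earlier applies to it. A hedged shell equivalent of that wait, using the same binary and kubeconfig paths the log shows:

    # Poll until the default ServiceAccount exists, then confirm the RBAC binding is in place.
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get clusterrolebinding minikube-rbac \
          --kubeconfig=/var/lib/minikube/kubeconfig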
	W0316 00:23:12.497951  123454 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:23:12.497962  123454 kubeadm.go:393] duration metric: took 5m16.508852945s to StartCluster
	I0316 00:23:12.497988  123454 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.498139  123454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:23:12.500632  123454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.500995  123454 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:23:12.502850  123454 out.go:177] * Verifying Kubernetes components...
	I0316 00:23:12.501089  123454 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:23:12.501233  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:23:12.504432  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:23:12.504443  123454 addons.go:69] Setting storage-provisioner=true in profile "no-preload-238598"
	I0316 00:23:12.504491  123454 addons.go:234] Setting addon storage-provisioner=true in "no-preload-238598"
	I0316 00:23:12.504502  123454 addons.go:69] Setting default-storageclass=true in profile "no-preload-238598"
	I0316 00:23:12.504515  123454 addons.go:69] Setting metrics-server=true in profile "no-preload-238598"
	I0316 00:23:12.504526  123454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-238598"
	I0316 00:23:12.504541  123454 addons.go:234] Setting addon metrics-server=true in "no-preload-238598"
	W0316 00:23:12.504551  123454 addons.go:243] addon metrics-server should already be in state true
	I0316 00:23:12.504582  123454 host.go:66] Checking if "no-preload-238598" exists ...
	W0316 00:23:12.504505  123454 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:23:12.504656  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.504996  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505012  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.505013  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505229  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.521634  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0316 00:23:12.521698  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0316 00:23:12.522283  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522377  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522836  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.522861  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.522990  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.523032  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.523203  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523375  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523737  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.523758  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524232  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.524277  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524695  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0316 00:23:12.525112  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.525610  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.525637  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.526025  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.526218  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.530010  123454 addons.go:234] Setting addon default-storageclass=true in "no-preload-238598"
	W0316 00:23:12.530029  123454 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:23:12.530053  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.530277  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.530315  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.540310  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0316 00:23:12.545850  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.545966  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0316 00:23:12.546335  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.546740  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.546761  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.547035  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.547232  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.548605  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.548626  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.549001  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.549058  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0316 00:23:12.549268  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.549323  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.549454  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.551419  123454 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:23:12.549975  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.551115  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.553027  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:23:12.553050  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:23:12.553074  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.553082  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.554948  123454 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:23:12.553404  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.556096  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556544  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.556568  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556640  123454 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.556660  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:23:12.556679  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.556769  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.557150  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.557176  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.557398  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.557600  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.557886  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.560220  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560555  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.560582  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560759  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.560982  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.561157  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.561318  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.574877  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0316 00:23:12.575802  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.576313  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.576337  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.576640  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.577015  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.578483  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.578814  123454 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.578835  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:23:12.578856  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.581832  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582439  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.582454  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.582465  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582635  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.582819  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.582969  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.729051  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:23:12.747162  123454 node_ready.go:35] waiting up to 6m0s for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.759957  123454 node_ready.go:49] node "no-preload-238598" has status "Ready":"True"
	I0316 00:23:12.759992  123454 node_ready.go:38] duration metric: took 12.79378ms for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.760006  123454 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.772201  123454 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795626  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.795660  123454 pod_ready.go:81] duration metric: took 23.429082ms for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795674  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808661  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.808688  123454 pod_ready.go:81] duration metric: took 13.006568ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808699  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821578  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.821613  123454 pod_ready.go:81] duration metric: took 12.904651ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821627  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.832585  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:23:12.832616  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:23:12.838375  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.838404  123454 pod_ready.go:81] duration metric: took 16.768452ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.838415  123454 pod_ready.go:38] duration metric: took 78.396172ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
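The pod_ready checks above verify each static control-plane pod reports Ready. Roughly the same check expressed with kubectl (illustrative, not minikube's code path; assumes the host kubectl context matches the profile name, as the kubeconfig update above sets up):

    # Wait for the control-plane pods the log just checked to be Ready.
    kubectl --context no-preload-238598 -n kube-system wait --for=condition=Ready \
      pod/etcd-no-preload-238598 pod/kube-apiserver-no-preload-238598 \
      pod/kube-controller-manager-no-preload-238598 pod/kube-scheduler-no-preload-238598 \
      --timeout=360s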
	I0316 00:23:12.838435  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:23:12.838522  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:23:12.889063  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.907225  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.924533  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:23:12.924565  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:23:12.947224  123454 api_server.go:72] duration metric: took 446.183679ms to wait for apiserver process to appear ...
	I0316 00:23:12.947257  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:23:12.947281  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:23:12.975463  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:12.975495  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:23:13.023702  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:23:13.039598  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:23:13.039638  123454 api_server.go:131] duration metric: took 92.372403ms to wait for apiserver health ...
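The healthz probe above can be reproduced by hand; the apiserver serves /healthz over TLS on the node IP and port shown. A quick hedged check (-k skips certificate verification; point --cacert at the cluster CA for a strict check):

    # Manual version of the apiserver health probe above.
    curl -k https://192.168.50.137:8443/healthz
    # expected output: ok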
	I0316 00:23:13.039649  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:23:13.069937  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:13.141358  123454 system_pods.go:59] 5 kube-system pods found
	I0316 00:23:13.141387  123454 system_pods.go:61] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.141391  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.141397  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.141400  123454 system_pods.go:61] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending
	I0316 00:23:13.141404  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.141411  123454 system_pods.go:74] duration metric: took 101.754765ms to wait for pod list to return data ...
	I0316 00:23:13.141419  123454 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:23:13.200153  123454 default_sa.go:45] found service account: "default"
	I0316 00:23:13.200193  123454 default_sa.go:55] duration metric: took 58.765381ms for default service account to be created ...
	I0316 00:23:13.200205  123454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:23:13.381398  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381431  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.381771  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.381825  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.381840  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.381849  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381862  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.382154  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.382159  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.382189  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.383303  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.383345  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.383353  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending
	I0316 00:23:13.383360  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.383368  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.383374  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.383384  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.383396  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.383440  123454 retry.go:31] will retry after 221.286986ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.408809  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.408839  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.409146  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.409191  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.409195  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.612171  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.612205  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612212  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612221  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.612226  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.612230  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.612236  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.612239  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.612260  123454 retry.go:31] will retry after 311.442515ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.934136  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.934170  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934177  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934185  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.934191  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.934197  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.934204  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.934210  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.934234  123454 retry.go:31] will retry after 453.147474ms: missing components: kube-dns, kube-proxy
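The retry messages above show system_pods relisting kube-system pods with a short backoff until the components it flags as missing (kube-dns, i.e. CoreDNS, and kube-proxy) reach Running. A hedged way to watch the same transition from the host:

    # Watch the components the retry loop reports as missing come up.
    kubectl --context no-preload-238598 -n kube-system get pods \
      -l 'k8s-app in (kube-dns, kube-proxy)' --watch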
	I0316 00:23:14.343055  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.435784176s)
	I0316 00:23:14.343123  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343139  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343497  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343523  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.343540  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343554  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343800  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.343876  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343895  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.404681  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.404725  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404738  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404748  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.404758  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.404767  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.404777  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.404790  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.404810  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.404821  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending
	I0316 00:23:14.404846  123454 retry.go:31] will retry after 464.575803ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.447649  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.377663696s)
	I0316 00:23:14.447706  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.447724  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448062  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448083  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448092  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.448100  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448367  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.448367  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448394  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448407  123454 addons.go:470] Verifying addon metrics-server=true in "no-preload-238598"
	I0316 00:23:14.450675  123454 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0316 00:23:14.452378  123454 addons.go:505] duration metric: took 1.951301533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
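With the metrics-server addon applied and verified above, it eventually backs the Kubernetes Metrics API. A hedged follow-up check (the addon may need a minute or two before it serves data, which is what the later metrics-server test failures in this report are probing):

    # Confirm the metrics-server APIService registers and starts answering.
    kubectl --context no-preload-238598 get apiservices v1beta1.metrics.k8s.io
    kubectl --context no-preload-238598 top nodes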
	I0316 00:23:14.888167  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.888206  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:14.888219  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.888226  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.888236  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.888243  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.888252  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.888260  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.888292  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.888301  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:14.888325  123454 retry.go:31] will retry after 490.515879ms: missing components: kube-proxy
	I0316 00:23:15.389667  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:15.389694  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:15.389700  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Running
	I0316 00:23:15.389704  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:15.389708  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:15.389712  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:15.389716  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Running
	I0316 00:23:15.389721  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:15.389728  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:15.389735  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:15.389745  123454 system_pods.go:126] duration metric: took 2.189532563s to wait for k8s-apps to be running ...
	I0316 00:23:15.389757  123454 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:23:15.389805  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:15.409241  123454 system_svc.go:56] duration metric: took 19.469575ms WaitForService to wait for kubelet
	I0316 00:23:15.409273  123454 kubeadm.go:576] duration metric: took 2.908240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:23:15.409292  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:23:15.412530  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:23:15.412559  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:23:15.412570  123454 node_conditions.go:105] duration metric: took 3.272979ms to run NodePressure ...
	I0316 00:23:15.412585  123454 start.go:240] waiting for startup goroutines ...
	I0316 00:23:15.412594  123454 start.go:245] waiting for cluster config update ...
	I0316 00:23:15.412608  123454 start.go:254] writing updated cluster config ...
	I0316 00:23:15.412923  123454 ssh_runner.go:195] Run: rm -f paused
	I0316 00:23:15.468245  123454 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 00:23:15.470311  123454 out.go:177] * Done! kubectl is now configured to use "no-preload-238598" cluster and "default" namespace by default
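The "Done!" line above states that kubectl is now pointed at the "no-preload-238598" cluster (kubectl 1.29.3 against a v1.29.0-rc.2 control plane, per the version-skew line). A short hedged sanity check at that point:

    # Quick post-"Done!" sanity check.
    kubectl config current-context      # expect: no-preload-238598
    kubectl get nodes -o wide
    kubectl -n kube-system get pods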
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 
	
	
	==> CRI-O <==
	Mar 16 00:25:27 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:27.965233341Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710548727965202829,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b173897c-c108-43dd-9f4f-f8460cd3a10c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:27 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:27.965853931Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=14144e4c-6b35-4f21-975e-ec8036df71f0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:27 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:27.965923147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=14144e4c-6b35-4f21-975e-ec8036df71f0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:27 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:27.965959628Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=14144e4c-6b35-4f21-975e-ec8036df71f0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.001255556Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8aa4c4c6-ce1c-4da4-a031-8849fc410eff name=/runtime.v1.RuntimeService/Version
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.001321717Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8aa4c4c6-ce1c-4da4-a031-8849fc410eff name=/runtime.v1.RuntimeService/Version
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.002971460Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b36c09b3-92f7-492e-9291-32e2df969f56 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.003397732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710548728003321622,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b36c09b3-92f7-492e-9291-32e2df969f56 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.004027137Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=309aa567-e309-452b-9a87-f4b44cbf940d name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.004093345Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=309aa567-e309-452b-9a87-f4b44cbf940d name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.004166769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=309aa567-e309-452b-9a87-f4b44cbf940d name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.040576019Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d4a7082-6e11-40be-adcd-7f19a5139313 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.040643206Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d4a7082-6e11-40be-adcd-7f19a5139313 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.041963014Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=eb1e4986-3ec8-4d96-a32c-8f8a27779a44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.042418977Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710548728042312248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=eb1e4986-3ec8-4d96-a32c-8f8a27779a44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.042909761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=91b13315-c839-46f7-9d37-65d1ec7ba85d name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.042954481Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=91b13315-c839-46f7-9d37-65d1ec7ba85d name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.042984201Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=91b13315-c839-46f7-9d37-65d1ec7ba85d name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.084999960Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=529da80a-7dee-4390-a4dc-d7bca83af683 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.085072868Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=529da80a-7dee-4390-a4dc-d7bca83af683 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.086778140Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=934b90d7-d3c1-4b97-832d-909278e0a613 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.087123853Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710548728087105478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=934b90d7-d3c1-4b97-832d-909278e0a613 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.087783613Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2562363c-94db-456b-86bb-6d2998545a2e name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.087830314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2562363c-94db-456b-86bb-6d2998545a2e name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:25:28 old-k8s-version-402923 crio[648]: time="2024-03-16 00:25:28.087862325Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=2562363c-94db-456b-86bb-6d2998545a2e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.061034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045188] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar16 00:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.786648] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.691488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.819996] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.063026] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069540] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.190023] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.172778] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.261353] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.077973] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.071596] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.890538] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +12.663086] kauditd_printk_skb: 46 callbacks suppressed
	[Mar16 00:21] systemd-fstab-generator[5049]: Ignoring "noauto" option for root device
	[Mar16 00:23] systemd-fstab-generator[5330]: Ignoring "noauto" option for root device
	[  +0.068986] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:25:28 up 8 min,  0 users,  load average: 0.03, 0.10, 0.06
	Linux old-k8s-version-402923 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bf4880, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b30870, 0x24, 0x60, 0x7f108d2c2850, 0x118, ...)
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: net/http.(*Transport).dial(0xc0008bca00, 0x4f7fe00, 0xc000120018, 0x48ab5d6, 0x3, 0xc000b30870, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: net/http.(*Transport).dialConn(0xc0008bca00, 0x4f7fe00, 0xc000120018, 0x0, 0xc00055c300, 0x5, 0xc000b30870, 0x24, 0x0, 0xc000b24360, ...)
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: net/http.(*Transport).dialConnFor(0xc0008bca00, 0xc000ca40b0)
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: created by net/http.(*Transport).queueForDial
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: goroutine 169 [select]:
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: net.(*netFD).connect.func2(0x4f7fe40, 0xc0005ef140, 0xc00048d680, 0xc000ca64e0, 0xc000ca6480)
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /usr/local/go/src/net/fd_unix.go:118 +0xc5
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]: created by net.(*netFD).connect
	Mar 16 00:25:25 old-k8s-version-402923 kubelet[5512]:         /usr/local/go/src/net/fd_unix.go:117 +0x234
	Mar 16 00:25:25 old-k8s-version-402923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 16 00:25:25 old-k8s-version-402923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 16 00:25:26 old-k8s-version-402923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 16 00:25:26 old-k8s-version-402923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 16 00:25:26 old-k8s-version-402923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 16 00:25:26 old-k8s-version-402923 kubelet[5578]: I0316 00:25:26.536288    5578 server.go:416] Version: v1.20.0
	Mar 16 00:25:26 old-k8s-version-402923 kubelet[5578]: I0316 00:25:26.536787    5578 server.go:837] Client rotation is on, will bootstrap in background
	Mar 16 00:25:26 old-k8s-version-402923 kubelet[5578]: I0316 00:25:26.539805    5578 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 16 00:25:26 old-k8s-version-402923 kubelet[5578]: W0316 00:25:26.541483    5578 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 16 00:25:26 old-k8s-version-402923 kubelet[5578]: I0316 00:25:26.541540    5578 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (248.252381ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-402923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (744.55s)
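The failure above reduces to the kubelet on old-k8s-version-402923 never answering its health check on 127.0.0.1:10248 (it keeps crash-looping, per the systemd "status=255" lines), so kubeadm times out in the wait-control-plane phase and no control-plane containers are ever created. A minimal manual triage sequence, assuming shell access to that node and using only the commands the log itself recommends (the profile name and the cgroup-driver workaround are taken from the output above; CONTAINERID is the log's own placeholder), would be roughly:

	# Is the kubelet service up, and why did it last exit?
	systemctl status kubelet
	journalctl -xeu kubelet
	# Did CRI-O start any control-plane containers? (none were listed in this run)
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID
	# Workaround suggested in the log: align the kubelet cgroup driver with systemd
	minikube start -p old-k8s-version-402923 --extra-config=kubelet.cgroup-driver=systemd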

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-666637 -n embed-certs-666637
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-16 00:30:15.50650056 +0000 UTC m=+5645.929351828
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
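The test polls for up to 9m0s for a pod labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, and the wait expires before any such pod appears. A rough manual equivalent of that check (the --context name follows the minikube profile embed-certs-666637 from the log above; namespace and label selector are the ones the test waits on) would be:

	kubectl --context embed-certs-666637 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard --watch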
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-666637 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-666637 logs -n 25: (2.084698462s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-313368 ssh                                | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:13:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:00.891560  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:13:06.971548  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:10.043616  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:16.123615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:19.195641  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:25.275569  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:28.347627  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:34.427628  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:37.499621  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:43.579636  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:46.651611  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:52.731602  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:55.803555  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:01.883545  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:04.955579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:11.035610  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:14.107615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:20.187606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:23.259572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:29.339575  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:32.411617  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:38.491587  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:41.563659  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:47.643582  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:50.715565  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:56.795596  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:59.867614  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:05.947572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:09.019585  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:15.099606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:18.171563  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:24.251589  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:27.323592  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:33.403599  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:36.475652  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:42.555600  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:45.627577  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:51.707630  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:54.779625  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:00.859579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:03.931626  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:10.011762  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:13.083615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:16.087122  123537 start.go:364] duration metric: took 4m28.254030119s to acquireMachinesLock for "embed-certs-666637"
	I0316 00:16:16.087211  123537 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:16.087224  123537 fix.go:54] fixHost starting: 
	I0316 00:16:16.087613  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:16.087653  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:16.102371  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0316 00:16:16.102813  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:16.103305  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:16.103343  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:16.103693  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:16.103874  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:16.104010  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:16.105752  123537 fix.go:112] recreateIfNeeded on embed-certs-666637: state=Stopped err=<nil>
	I0316 00:16:16.105780  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	W0316 00:16:16.105959  123537 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:16.107881  123537 out.go:177] * Restarting existing kvm2 VM for "embed-certs-666637" ...
	I0316 00:16:16.109056  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Start
	I0316 00:16:16.109231  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring networks are active...
	I0316 00:16:16.110036  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network default is active
	I0316 00:16:16.110372  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network mk-embed-certs-666637 is active
	I0316 00:16:16.110782  123537 main.go:141] libmachine: (embed-certs-666637) Getting domain xml...
	I0316 00:16:16.111608  123537 main.go:141] libmachine: (embed-certs-666637) Creating domain...
	I0316 00:16:17.296901  123537 main.go:141] libmachine: (embed-certs-666637) Waiting to get IP...
	I0316 00:16:17.297746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.298129  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.298317  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.298111  124543 retry.go:31] will retry after 269.98852ms: waiting for machine to come up
	I0316 00:16:17.569866  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.570322  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.570349  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.570278  124543 retry.go:31] will retry after 244.711835ms: waiting for machine to come up
	I0316 00:16:16.084301  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:16.084359  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084699  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:16:16.084726  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084970  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:16:16.086868  123454 machine.go:97] duration metric: took 4m35.39093995s to provisionDockerMachine
	I0316 00:16:16.087007  123454 fix.go:56] duration metric: took 4m35.413006758s for fixHost
	I0316 00:16:16.087038  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 4m35.413320023s
	W0316 00:16:16.087068  123454 start.go:713] error starting host: provision: host is not running
	W0316 00:16:16.087236  123454 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0316 00:16:16.087249  123454 start.go:728] Will try again in 5 seconds ...
	I0316 00:16:17.816747  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.817165  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.817196  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.817109  124543 retry.go:31] will retry after 326.155242ms: waiting for machine to come up
	I0316 00:16:18.144611  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.145047  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.145081  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.145000  124543 retry.go:31] will retry after 464.805158ms: waiting for machine to come up
	I0316 00:16:18.611746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.612105  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.612140  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.612039  124543 retry.go:31] will retry after 593.718495ms: waiting for machine to come up
	I0316 00:16:19.208024  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.208444  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.208476  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.208379  124543 retry.go:31] will retry after 772.07702ms: waiting for machine to come up
	I0316 00:16:19.982326  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.982800  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.982827  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.982706  124543 retry.go:31] will retry after 846.887476ms: waiting for machine to come up
	I0316 00:16:20.830726  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:20.831144  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:20.831168  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:20.831098  124543 retry.go:31] will retry after 1.274824907s: waiting for machine to come up
	I0316 00:16:22.107855  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:22.108252  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:22.108278  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:22.108209  124543 retry.go:31] will retry after 1.41217789s: waiting for machine to come up
	I0316 00:16:21.088013  123454 start.go:360] acquireMachinesLock for no-preload-238598: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:23.522725  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:23.523143  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:23.523179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:23.523094  124543 retry.go:31] will retry after 1.567285216s: waiting for machine to come up
	I0316 00:16:25.092539  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:25.092954  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:25.092981  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:25.092941  124543 retry.go:31] will retry after 2.260428679s: waiting for machine to come up
	I0316 00:16:27.354650  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:27.355051  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:27.355082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:27.354990  124543 retry.go:31] will retry after 2.402464465s: waiting for machine to come up
	I0316 00:16:29.758774  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:29.759220  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:29.759253  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:29.759176  124543 retry.go:31] will retry after 3.63505234s: waiting for machine to come up
	I0316 00:16:34.648552  123819 start.go:364] duration metric: took 4m4.062008179s to acquireMachinesLock for "default-k8s-diff-port-313436"
	I0316 00:16:34.648628  123819 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:34.648638  123819 fix.go:54] fixHost starting: 
	I0316 00:16:34.649089  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:34.649134  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:34.667801  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0316 00:16:34.668234  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:34.668737  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:16:34.668768  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:34.669123  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:34.669349  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:34.669552  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:16:34.671100  123819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313436: state=Stopped err=<nil>
	I0316 00:16:34.671139  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	W0316 00:16:34.671297  123819 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:34.673738  123819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-313436" ...
	I0316 00:16:34.675120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Start
	I0316 00:16:34.675292  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring networks are active...
	I0316 00:16:34.676038  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network default is active
	I0316 00:16:34.676427  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network mk-default-k8s-diff-port-313436 is active
	I0316 00:16:34.676855  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Getting domain xml...
	I0316 00:16:34.677501  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Creating domain...
	I0316 00:16:33.397686  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398274  123537 main.go:141] libmachine: (embed-certs-666637) Found IP for machine: 192.168.61.91
	I0316 00:16:33.398301  123537 main.go:141] libmachine: (embed-certs-666637) Reserving static IP address...
	I0316 00:16:33.398319  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has current primary IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398829  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.398859  123537 main.go:141] libmachine: (embed-certs-666637) DBG | skip adding static IP to network mk-embed-certs-666637 - found existing host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"}
	I0316 00:16:33.398883  123537 main.go:141] libmachine: (embed-certs-666637) Reserved static IP address: 192.168.61.91
	I0316 00:16:33.398896  123537 main.go:141] libmachine: (embed-certs-666637) Waiting for SSH to be available...
	I0316 00:16:33.398905  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Getting to WaitForSSH function...
	I0316 00:16:33.401376  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.401835  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.401872  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.402054  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH client type: external
	I0316 00:16:33.402082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa (-rw-------)
	I0316 00:16:33.402113  123537 main.go:141] libmachine: (embed-certs-666637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:33.402141  123537 main.go:141] libmachine: (embed-certs-666637) DBG | About to run SSH command:
	I0316 00:16:33.402188  123537 main.go:141] libmachine: (embed-certs-666637) DBG | exit 0
	I0316 00:16:33.523353  123537 main.go:141] libmachine: (embed-certs-666637) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:33.523747  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetConfigRaw
	I0316 00:16:33.524393  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.526639  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527046  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.527080  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527278  123537 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:16:33.527509  123537 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:33.527527  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:33.527766  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.529906  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.530210  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530341  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.530596  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530816  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530953  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.531119  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.531334  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.531348  123537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:33.635573  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:33.635601  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.635879  123537 buildroot.go:166] provisioning hostname "embed-certs-666637"
	I0316 00:16:33.635905  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.636109  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.638998  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639369  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.639417  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639629  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.639795  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.639971  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.640103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.640366  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.640524  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.640543  123537 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-666637 && echo "embed-certs-666637" | sudo tee /etc/hostname
	I0316 00:16:33.757019  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-666637
	
	I0316 00:16:33.757049  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.759808  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760120  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.760154  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760375  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.760583  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760723  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760829  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.760951  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.761121  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.761144  123537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-666637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-666637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-666637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:33.873548  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:33.873587  123537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:33.873642  123537 buildroot.go:174] setting up certificates
	I0316 00:16:33.873654  123537 provision.go:84] configureAuth start
	I0316 00:16:33.873666  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.873986  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.876609  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.876976  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.877004  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.877194  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.879624  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880156  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.880185  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880300  123537 provision.go:143] copyHostCerts
	I0316 00:16:33.880359  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:33.880370  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:33.880441  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:33.880526  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:33.880534  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:33.880558  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:33.880625  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:33.880632  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:33.880653  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:33.880707  123537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.embed-certs-666637 san=[127.0.0.1 192.168.61.91 embed-certs-666637 localhost minikube]
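The line above reports the SAN list baked into the generated server certificate. As a purely illustrative check (the path is the ServerCertPath from this same log, not a new artifact), the SANs of that file could be confirmed on the host with openssl:

    # Illustrative only: inspect the SANs of the server cert that provision.go reports generating.
    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'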
	I0316 00:16:33.984403  123537 provision.go:177] copyRemoteCerts
	I0316 00:16:33.984471  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:33.984499  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.987297  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987711  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.987741  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987894  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.988108  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.988284  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.988456  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.069540  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:34.094494  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 00:16:34.119198  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:34.144669  123537 provision.go:87] duration metric: took 271.000471ms to configureAuth
	I0316 00:16:34.144701  123537 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:34.144891  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:34.144989  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.148055  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148464  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.148496  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148710  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.148918  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149097  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149251  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.149416  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.149580  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.149596  123537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:34.414026  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:34.414058  123537 machine.go:97] duration metric: took 886.536134ms to provisionDockerMachine
	I0316 00:16:34.414070  123537 start.go:293] postStartSetup for "embed-certs-666637" (driver="kvm2")
	I0316 00:16:34.414081  123537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:34.414101  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.414464  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:34.414497  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.417211  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417482  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.417520  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417617  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.417804  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.417990  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.418126  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.498223  123537 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:34.502954  123537 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:34.502989  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:34.503068  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:34.503156  123537 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:34.503258  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:34.513065  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:34.537606  123537 start.go:296] duration metric: took 123.521431ms for postStartSetup
	I0316 00:16:34.537657  123537 fix.go:56] duration metric: took 18.450434099s for fixHost
	I0316 00:16:34.537679  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.540574  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.540908  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.540950  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.541086  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.541302  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541471  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541609  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.541803  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.542009  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.542025  123537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:34.648381  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548194.613058580
	
	I0316 00:16:34.648419  123537 fix.go:216] guest clock: 1710548194.613058580
	I0316 00:16:34.648427  123537 fix.go:229] Guest: 2024-03-16 00:16:34.61305858 +0000 UTC Remote: 2024-03-16 00:16:34.537661993 +0000 UTC m=+286.854063579 (delta=75.396587ms)
	I0316 00:16:34.648454  123537 fix.go:200] guest clock delta is within tolerance: 75.396587ms
	I0316 00:16:34.648459  123537 start.go:83] releasing machines lock for "embed-certs-666637", held for 18.561300744s
	I0316 00:16:34.648483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.648770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:34.651350  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651748  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.651794  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651926  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652573  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652810  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652907  123537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:34.652965  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.653064  123537 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:34.653090  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.655796  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656121  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656149  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656170  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656281  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656461  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.656562  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656586  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656640  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.656739  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656807  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.656883  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.657023  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.657249  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.759596  123537 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:34.765571  123537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:34.915897  123537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:34.923372  123537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:34.923471  123537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:34.940579  123537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
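The runner disables conflicting CNI configs by renaming them with a ".mk_disabled" suffix, as logged above for 87-podman-bridge.conflist. A minimal sketch (file name taken from the log; restoring it is not something the test itself does) of how to list or undo that rename by hand:

    # Sketch only: see which CNI configs minikube parked out of the way.
    ls /etc/cni/net.d/*.mk_disabled
    # Manual restore, if ever needed:
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled \
            /etc/cni/net.d/87-podman-bridge.conflist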
	I0316 00:16:34.940613  123537 start.go:494] detecting cgroup driver to use...
	I0316 00:16:34.940699  123537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:34.957640  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:34.971525  123537 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:34.971598  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:34.987985  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:35.001952  123537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:35.124357  123537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:35.273948  123537 docker.go:233] disabling docker service ...
	I0316 00:16:35.274037  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:35.291073  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:35.311209  123537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:35.460630  123537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:35.581263  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:35.596460  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:35.617992  123537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:35.618042  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.628372  123537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:35.628426  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.639487  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.650397  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
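The three sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-insert the conmon cgroup. Assuming all three patterns matched, the drop-in should now contain roughly the following (reconstructed from the sed commands, not captured from the VM):

    # Roughly what the drop-in holds after the edits above (reconstruction, not a capture):
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"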
	I0316 00:16:35.662065  123537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:35.676003  123537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:35.686159  123537 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:35.686241  123537 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:35.699814  123537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
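The sysctl probe above failed only because br_netfilter was not yet loaded, so the runner loads the module and enables IPv4 forwarding ad hoc. As a general illustration (this is not what minikube's ISO does; the drop-in file name below is invented for the example), the same two prerequisites are usually made persistent like this:

    # General illustration of persisting the prerequisites the log sets up ad hoc.
    echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
    printf 'net.bridge.bridge-nf-call-iptables = 1\nnet.ipv4.ip_forward = 1\n' \
      | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf   # file name is illustrative
    sudo sysctl --system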
	I0316 00:16:35.710182  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:35.831831  123537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:35.977556  123537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:35.977638  123537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:35.982729  123537 start.go:562] Will wait 60s for crictl version
	I0316 00:16:35.982806  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:16:35.986695  123537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:36.023299  123537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:36.023412  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.055441  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.090313  123537 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:36.091622  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:36.094687  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095062  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:36.095098  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095277  123537 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:36.099781  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:36.113522  123537 kubeadm.go:877] updating cluster {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:36.113674  123537 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:36.113743  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:36.152208  123537 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:36.152300  123537 ssh_runner.go:195] Run: which lz4
	I0316 00:16:36.156802  123537 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:36.161430  123537 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:36.161472  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:35.911510  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting to get IP...
	I0316 00:16:35.912562  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.912986  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.913064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:35.912955  124655 retry.go:31] will retry after 248.147893ms: waiting for machine to come up
	I0316 00:16:36.162476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163094  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163127  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.163032  124655 retry.go:31] will retry after 387.219214ms: waiting for machine to come up
	I0316 00:16:36.551678  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552203  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552236  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.552178  124655 retry.go:31] will retry after 391.385671ms: waiting for machine to come up
	I0316 00:16:36.945741  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946275  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.946216  124655 retry.go:31] will retry after 470.449619ms: waiting for machine to come up
	I0316 00:16:37.417836  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418324  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418353  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.418259  124655 retry.go:31] will retry after 508.962644ms: waiting for machine to come up
	I0316 00:16:37.929194  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929710  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.929671  124655 retry.go:31] will retry after 877.538639ms: waiting for machine to come up
	I0316 00:16:38.808551  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809061  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809100  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:38.809002  124655 retry.go:31] will retry after 754.319242ms: waiting for machine to come up
	I0316 00:16:39.565060  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565475  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565512  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:39.565411  124655 retry.go:31] will retry after 1.472475348s: waiting for machine to come up
	I0316 00:16:37.946470  123537 crio.go:444] duration metric: took 1.789700065s to copy over tarball
	I0316 00:16:37.946552  123537 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:40.497841  123537 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551257887s)
	I0316 00:16:40.497867  123537 crio.go:451] duration metric: took 2.551367803s to extract the tarball
	I0316 00:16:40.497875  123537 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:40.539695  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:40.588945  123537 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:40.588974  123537 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:40.588983  123537 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.28.4 crio true true} ...
	I0316 00:16:40.589125  123537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-666637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:40.589216  123537 ssh_runner.go:195] Run: crio config
	I0316 00:16:40.641673  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:40.641702  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:40.641719  123537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:40.641754  123537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-666637 NodeName:embed-certs-666637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:40.641939  123537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-666637"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
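The document above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration into the single file that the later "kubeadm init phase … --config /var/tmp/minikube/kubeadm.yaml" calls consume. If you wanted to sanity-check such a file by hand, newer kubeadm releases ship a validator; whether this exact subcommand exists depends on the kubeadm build on the guest, so treat this as an assumption rather than part of the test flow:

    # Assumes the bundled kubeadm provides "config validate"; purely illustrative.
    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml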
	
	I0316 00:16:40.642024  123537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:40.652461  123537 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:40.652539  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:40.662114  123537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 00:16:40.679782  123537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:40.701982  123537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0316 00:16:40.720088  123537 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:40.724199  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:40.737133  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:40.860343  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:40.878437  123537 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637 for IP: 192.168.61.91
	I0316 00:16:40.878466  123537 certs.go:194] generating shared ca certs ...
	I0316 00:16:40.878489  123537 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:40.878690  123537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:40.878766  123537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:40.878779  123537 certs.go:256] generating profile certs ...
	I0316 00:16:40.878888  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/client.key
	I0316 00:16:40.878990  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key.07955952
	I0316 00:16:40.879059  123537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key
	I0316 00:16:40.879178  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:40.879225  123537 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:40.879239  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:40.879271  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:40.879302  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:40.879352  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:40.879409  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:40.880141  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:40.924047  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:40.962441  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:41.000283  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:41.034353  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 00:16:41.069315  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:16:41.100325  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:16:41.129285  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:16:41.155899  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:16:41.180657  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:16:41.205961  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:16:41.231886  123537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:16:41.249785  123537 ssh_runner.go:195] Run: openssl version
	I0316 00:16:41.255703  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:16:41.266968  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271536  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271595  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.277460  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:16:41.288854  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:16:41.300302  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305189  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305256  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.311200  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:16:41.322784  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:16:41.334879  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339774  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339837  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.345746  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
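The repeated pattern above (copy the cert into /usr/share/ca-certificates, hash it, then symlink it as <hash>.0 under /etc/ssl/certs) is the standard OpenSSL subject-hash lookup scheme. A compact sketch of the same idiom for one certificate, using the minikubeCA file from this log:

    # Same idiom the runner applies above, shown end-to-end for one certificate.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as seen in the log
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"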
	I0316 00:16:41.357661  123537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:16:41.362469  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:16:41.368875  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:16:41.375759  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:16:41.382518  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:16:41.388629  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:16:41.394882  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
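The six openssl invocations above all use "-checkend 86400": they exit non-zero if the certificate expires within the next 24 hours, which is presumably how the restart path decides whether the existing control-plane certs can be reused. For example:

    # -checkend N exits 0 only if the cert is still valid N seconds from now (here 24h).
    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
      echo "cert valid for at least another day"
    else
      echo "cert expires within 24h (or could not be read)"
    fi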
	I0316 00:16:41.401114  123537 kubeadm.go:391] StartCluster: {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:16:41.401243  123537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:16:41.401304  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.449499  123537 cri.go:89] found id: ""
	I0316 00:16:41.449590  123537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:16:41.461139  123537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:16:41.461165  123537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:16:41.461173  123537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:16:41.461243  123537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:16:41.473648  123537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:16:41.474652  123537 kubeconfig.go:125] found "embed-certs-666637" server: "https://192.168.61.91:8443"
	I0316 00:16:41.476724  123537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:16:41.488387  123537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0316 00:16:41.488426  123537 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:16:41.488439  123537 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:16:41.488485  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.526197  123537 cri.go:89] found id: ""
	I0316 00:16:41.526283  123537 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:16:41.545489  123537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:16:41.555977  123537 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:16:41.555998  123537 kubeadm.go:156] found existing configuration files:
	
	I0316 00:16:41.556048  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:16:41.565806  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:16:41.565891  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:16:41.575646  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:16:41.585269  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:16:41.585329  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:16:41.595336  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.605081  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:16:41.605144  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.615182  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:16:41.624781  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:16:41.624837  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:16:41.634852  123537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:16:41.644749  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.748782  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.477775  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.688730  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.039441  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039924  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:41.039885  124655 retry.go:31] will retry after 1.408692905s: waiting for machine to come up
	I0316 00:16:42.449971  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450402  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:42.450355  124655 retry.go:31] will retry after 1.539639877s: waiting for machine to come up
	I0316 00:16:43.992314  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992833  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992869  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:43.992777  124655 retry.go:31] will retry after 2.297369864s: waiting for machine to come up
	I0316 00:16:42.777223  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.944089  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:16:42.944193  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.445082  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.945117  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.963812  123537 api_server.go:72] duration metric: took 1.019723734s to wait for apiserver process to appear ...
	I0316 00:16:43.963845  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:16:43.963871  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.924208  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.924258  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.924278  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.953212  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.953245  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.964449  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.988201  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.988232  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:47.464502  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.469385  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.469421  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:47.964483  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.970448  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.970492  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:48.463984  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:48.468908  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:16:48.476120  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:16:48.476153  123537 api_server.go:131] duration metric: took 4.512298176s to wait for apiserver health ...
	I0316 00:16:48.476164  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:48.476172  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:48.478076  123537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:16:48.479565  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:16:48.490129  123537 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:16:48.516263  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:16:48.532732  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:16:48.532768  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:16:48.532778  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:16:48.532788  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:16:48.532795  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:16:48.532801  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:16:48.532808  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:16:48.532815  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:16:48.532822  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:16:48.532833  123537 system_pods.go:74] duration metric: took 16.547677ms to wait for pod list to return data ...
	I0316 00:16:48.532845  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:16:48.535945  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:16:48.535989  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:16:48.536006  123537 node_conditions.go:105] duration metric: took 3.154184ms to run NodePressure ...
	I0316 00:16:48.536027  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:48.733537  123537 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739166  123537 kubeadm.go:733] kubelet initialised
	I0316 00:16:48.739196  123537 kubeadm.go:734] duration metric: took 5.63118ms waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739209  123537 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:48.744724  123537 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.750261  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750299  123537 pod_ready.go:81] duration metric: took 5.547917ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.750310  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750323  123537 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.755340  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755362  123537 pod_ready.go:81] duration metric: took 5.029639ms for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.755371  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755379  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.761104  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761128  123537 pod_ready.go:81] duration metric: took 5.740133ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.761138  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761146  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.921215  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921244  123537 pod_ready.go:81] duration metric: took 160.08501ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.921254  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921260  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.319922  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319954  123537 pod_ready.go:81] duration metric: took 398.685799ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.319963  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319969  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.720866  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720922  123537 pod_ready.go:81] duration metric: took 400.944023ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.720948  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720967  123537 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:50.120836  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120865  123537 pod_ready.go:81] duration metric: took 399.883676ms for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:50.120875  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120882  123537 pod_ready.go:38] duration metric: took 1.381661602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:50.120923  123537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:16:50.133619  123537 ops.go:34] apiserver oom_adj: -16
	I0316 00:16:50.133653  123537 kubeadm.go:591] duration metric: took 8.672472438s to restartPrimaryControlPlane
	I0316 00:16:50.133663  123537 kubeadm.go:393] duration metric: took 8.732557685s to StartCluster
	I0316 00:16:50.133684  123537 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.133760  123537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:16:50.135355  123537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.135613  123537 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:16:50.140637  123537 out.go:177] * Verifying Kubernetes components...
	I0316 00:16:50.135727  123537 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:16:50.135843  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:50.142015  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:50.142027  123537 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-666637"
	I0316 00:16:50.142050  123537 addons.go:69] Setting default-storageclass=true in profile "embed-certs-666637"
	I0316 00:16:50.142070  123537 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-666637"
	W0316 00:16:50.142079  123537 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:16:50.142090  123537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-666637"
	I0316 00:16:50.142092  123537 addons.go:69] Setting metrics-server=true in profile "embed-certs-666637"
	I0316 00:16:50.142121  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142124  123537 addons.go:234] Setting addon metrics-server=true in "embed-certs-666637"
	W0316 00:16:50.142136  123537 addons.go:243] addon metrics-server should already be in state true
	I0316 00:16:50.142168  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142439  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142468  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142558  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142577  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.156773  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0316 00:16:50.156804  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0316 00:16:50.157267  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157268  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157591  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0316 00:16:50.157835  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157841  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157857  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157858  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157925  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.158223  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158226  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158404  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.158419  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.158731  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158753  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158795  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158828  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158932  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.159126  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.162347  123537 addons.go:234] Setting addon default-storageclass=true in "embed-certs-666637"
	W0316 00:16:50.162365  123537 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:16:50.162392  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.162612  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.162649  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.172299  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0316 00:16:50.172676  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.173173  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.173193  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.173547  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.173770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.175668  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.177676  123537 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:16:50.175968  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0316 00:16:50.176110  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0316 00:16:50.179172  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:16:50.179189  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:16:50.179206  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.179453  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179538  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179888  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.179909  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180021  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.180037  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180266  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180385  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180613  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.180788  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.180811  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.185060  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.192504  123537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:16:46.292804  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293326  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293363  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:46.293267  124655 retry.go:31] will retry after 2.301997121s: waiting for machine to come up
	I0316 00:16:48.596337  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596777  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:48.596731  124655 retry.go:31] will retry after 3.159447069s: waiting for machine to come up
	I0316 00:16:50.186146  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.186717  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.193945  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.193971  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.194051  123537 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.194079  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:16:50.194100  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.194103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.194264  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.194420  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.196511  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0316 00:16:50.197160  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.197580  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.197598  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.197658  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198007  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.198039  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.198038  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198235  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.198237  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.198435  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.198612  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.198772  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.200270  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.200540  123537 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.200554  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:16:50.200566  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.203147  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203634  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.203655  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203765  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.203966  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.204201  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.204335  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.317046  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:50.340203  123537 node_ready.go:35] waiting up to 6m0s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:50.415453  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.423732  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.424648  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:16:50.424663  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:16:50.470134  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:16:50.470164  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:16:50.518806  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:50.518833  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:16:50.570454  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:51.627153  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203388401s)
	I0316 00:16:51.627211  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627222  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627419  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211925303s)
	I0316 00:16:51.627468  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627533  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627595  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627609  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627620  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627549  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627859  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627885  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627895  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627914  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627956  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627976  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.629345  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.633811  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.633831  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.634043  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.634081  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726400  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.15588774s)
	I0316 00:16:51.726458  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726472  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.726820  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.726853  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.726875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726889  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726898  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.727178  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.727193  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.727206  123537 addons.go:470] Verifying addon metrics-server=true in "embed-certs-666637"
	I0316 00:16:51.729277  123537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0316 00:16:51.730645  123537 addons.go:505] duration metric: took 1.594919212s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0316 00:16:52.344107  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:51.757133  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757570  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Found IP for machine: 192.168.72.198
	I0316 00:16:51.757603  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has current primary IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserving static IP address...
	I0316 00:16:51.758067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.758093  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | skip adding static IP to network mk-default-k8s-diff-port-313436 - found existing host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"}
	I0316 00:16:51.758110  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserved static IP address: 192.168.72.198
	I0316 00:16:51.758120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Getting to WaitForSSH function...
	I0316 00:16:51.758138  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for SSH to be available...
	I0316 00:16:51.760276  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760596  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.760632  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760711  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH client type: external
	I0316 00:16:51.760744  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa (-rw-------)
	I0316 00:16:51.760797  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:51.760820  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | About to run SSH command:
	I0316 00:16:51.760861  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | exit 0
	I0316 00:16:51.887432  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:51.887829  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetConfigRaw
	I0316 00:16:51.888471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:51.891514  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.891923  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.891949  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.892232  123819 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:16:51.892502  123819 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:51.892527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:51.892782  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:51.895025  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.895367  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:51.895683  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895841  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:51.896178  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:51.896361  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:51.896372  123819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:52.012107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:52.012154  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012405  123819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-313436"
	I0316 00:16:52.012434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012640  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.015307  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.015823  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.015847  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.016055  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.016266  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016433  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016565  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.016758  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.016976  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.016992  123819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313436 && echo "default-k8s-diff-port-313436" | sudo tee /etc/hostname
	I0316 00:16:52.149152  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313436
	
	I0316 00:16:52.149180  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.152472  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.152852  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.152896  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.153056  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.153239  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153412  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.153837  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.154077  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.154108  123819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:52.285258  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:52.285290  123819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:52.285313  123819 buildroot.go:174] setting up certificates
	I0316 00:16:52.285323  123819 provision.go:84] configureAuth start
	I0316 00:16:52.285331  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.285631  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:52.288214  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288494  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.288527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288699  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.290965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291354  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.291380  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291571  123819 provision.go:143] copyHostCerts
	I0316 00:16:52.291644  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:52.291658  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:52.291719  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:52.291827  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:52.291839  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:52.291868  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:52.291966  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:52.291978  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:52.292005  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:52.292095  123819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313436 san=[127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube]
	I0316 00:16:52.536692  123819 provision.go:177] copyRemoteCerts
	I0316 00:16:52.536756  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:52.536790  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.539525  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.539805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.539837  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.540067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.540264  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.540424  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.540599  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:52.629139  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:52.655092  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0316 00:16:52.681372  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:52.706496  123819 provision.go:87] duration metric: took 421.160351ms to configureAuth
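	The configureAuth step above generates a server certificate whose SANs cover the VM IP and hostnames and copies it to /etc/docker on the guest. As an illustrative aside only (not minikube's actual code), a minimal Go sketch of issuing such a SAN-bearing server certificate with the standard library could look like this; it is self-signed here for brevity, whereas the real step signs with the profile's CA:

```go
// Sketch only: issue a server certificate with the SANs seen in the log.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-313436"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the profile's CertExpiration
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs mirroring the provision log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.198")},
		DNSNames:    []string{"default-k8s-diff-port-313436", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
```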
	I0316 00:16:52.706529  123819 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:52.706737  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:52.706828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.709743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710173  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.710198  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710403  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.710616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710822  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710983  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.711148  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.711359  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.711380  123819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:53.005107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:53.005138  123819 machine.go:97] duration metric: took 1.112619102s to provisionDockerMachine
	I0316 00:16:53.005153  123819 start.go:293] postStartSetup for "default-k8s-diff-port-313436" (driver="kvm2")
	I0316 00:16:53.005166  123819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:53.005185  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.005547  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:53.005581  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.008749  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009170  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.009196  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009416  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.009617  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.009795  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.009973  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.100468  123819 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:53.105158  123819 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:53.105181  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:53.105243  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:53.105314  123819 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:53.105399  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:53.116078  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:53.142400  123819 start.go:296] duration metric: took 137.231635ms for postStartSetup
	I0316 00:16:53.142454  123819 fix.go:56] duration metric: took 18.493815855s for fixHost
	I0316 00:16:53.142483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.145282  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145658  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.145688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145878  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.146104  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146288  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146445  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.146625  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:53.146820  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:53.146834  123819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:53.260232  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548213.237261690
	
	I0316 00:16:53.260255  123819 fix.go:216] guest clock: 1710548213.237261690
	I0316 00:16:53.260262  123819 fix.go:229] Guest: 2024-03-16 00:16:53.23726169 +0000 UTC Remote: 2024-03-16 00:16:53.142460792 +0000 UTC m=+262.706636561 (delta=94.800898ms)
	I0316 00:16:53.260292  123819 fix.go:200] guest clock delta is within tolerance: 94.800898ms
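	The fix step above reads the guest clock with date +%s.%N and compares it with the host time. A hedged Go sketch of that comparison follows; the parsing helper and the 2-second tolerance are assumptions for illustration, not taken from the log:

```go
// Sketch only: parse a `date +%s.%N` style value and check the drift.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Right-pad to 9 digits so "2372" means 237200000ns, not 2372ns.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710548213.237261690")
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v (within assumed 2s tolerance: %v)\n", delta, delta < 2*time.Second)
}
```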
	I0316 00:16:53.260298  123819 start.go:83] releasing machines lock for "default-k8s-diff-port-313436", held for 18.611697781s
	I0316 00:16:53.260323  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.260629  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:53.263641  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264002  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.264032  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.264889  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265217  123819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:53.265273  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.265404  123819 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:53.265434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.268274  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268538  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268684  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268727  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.268969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268995  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.269113  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269206  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.269298  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269419  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.269476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269572  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.372247  123819 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:53.378643  123819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:53.527036  123819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:53.534220  123819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:53.534312  123819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:53.554856  123819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:53.554900  123819 start.go:494] detecting cgroup driver to use...
	I0316 00:16:53.554971  123819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:53.580723  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:53.599919  123819 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:53.599996  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:53.613989  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:53.628748  123819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:53.745409  123819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:53.906668  123819 docker.go:233] disabling docker service ...
	I0316 00:16:53.906733  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:53.928452  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:53.949195  123819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:54.118868  123819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:54.250006  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:54.264754  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:54.285825  123819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:54.285890  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.298522  123819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:54.298590  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.311118  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.323928  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
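	The sed commands above pin the pause image and cgroup settings in /etc/crio/crio.conf.d/02-crio.conf. For illustration only, one such single-line rewrite done from Go; the path and value come from the log, the helper program itself is an assumption:

```go
// Sketch only: replace the pause_image line in the CRI-O drop-in config.
package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
}
```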
	I0316 00:16:54.336128  123819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:54.348715  123819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:54.359657  123819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:54.359718  123819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:54.376411  123819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:16:54.388136  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:54.530444  123819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:54.681895  123819 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:54.681984  123819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:54.687334  123819 start.go:562] Will wait 60s for crictl version
	I0316 00:16:54.687398  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:16:54.691443  123819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:54.730408  123819 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:54.730505  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.761591  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.792351  123819 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
	I0316 00:16:54.793693  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:54.797023  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797439  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:54.797471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797665  123819 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:54.802065  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
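	The bash pipeline above rewrites /etc/hosts so that exactly one host.minikube.internal entry points at the gateway IP: drop any stale line for that name, then append the current mapping. A hedged Go sketch of the same upsert (the helper name is invented for illustration):

```go
// Sketch only: idempotently (re)insert an "IP<TAB>hostname" line into a hosts file.
package main

import (
	"os"
	"strings"
)

func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the old mapping, whatever IP it pointed at
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.72.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
```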
	I0316 00:16:54.815168  123819 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:54.815285  123819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:54.815345  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:54.855493  123819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:54.855553  123819 ssh_runner.go:195] Run: which lz4
	I0316 00:16:54.860096  123819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:54.865644  123819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:54.865675  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:54.345117  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:56.346342  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:57.346164  123537 node_ready.go:49] node "embed-certs-666637" has status "Ready":"True"
	I0316 00:16:57.346194  123537 node_ready.go:38] duration metric: took 7.005950923s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:57.346207  123537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:57.361331  123537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377726  123537 pod_ready.go:92] pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace has status "Ready":"True"
	I0316 00:16:57.377750  123537 pod_ready.go:81] duration metric: took 16.388353ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377760  123537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
	I0316 00:16:56.676506  123819 crio.go:444] duration metric: took 1.816442841s to copy over tarball
	I0316 00:16:56.676609  123819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:59.338617  123819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661966532s)
	I0316 00:16:59.338655  123819 crio.go:451] duration metric: took 2.662115388s to extract the tarball
	I0316 00:16:59.338665  123819 ssh_runner.go:146] rm: /preloaded.tar.lz4
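	The preload step above copies the v1.28.4 cri-o image tarball to the guest, extracts it with an lz4-compressed tar, and then removes it. For illustration only, the same extraction command invoked from Go via os/exec:

```go
// Sketch only: run the lz4-compressed tar extraction seen in the log.
package main

import (
	"log"
	"os/exec"
)

func main() {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("extract failed: %v\n%s", err, out)
	}
	log.Printf("preload extracted:\n%s", out)
}
```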
	I0316 00:16:59.387693  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:59.453534  123819 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:59.453565  123819 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:59.453575  123819 kubeadm.go:928] updating node { 192.168.72.198 8444 v1.28.4 crio true true} ...
	I0316 00:16:59.453744  123819 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-313436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:59.453841  123819 ssh_runner.go:195] Run: crio config
	I0316 00:16:59.518492  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:16:59.518525  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:59.518543  123819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:59.518572  123819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.198 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313436 NodeName:default-k8s-diff-port-313436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:59.518791  123819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.198
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313436"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:59.518876  123819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:59.529778  123819 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:59.529860  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:59.542186  123819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0316 00:16:59.563037  123819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:59.585167  123819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 00:16:59.607744  123819 ssh_runner.go:195] Run: grep 192.168.72.198	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:59.612687  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:59.628607  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:59.767487  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:59.786494  123819 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436 for IP: 192.168.72.198
	I0316 00:16:59.786520  123819 certs.go:194] generating shared ca certs ...
	I0316 00:16:59.786545  123819 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:59.786688  123819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:59.786722  123819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:59.786728  123819 certs.go:256] generating profile certs ...
	I0316 00:16:59.786827  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.key
	I0316 00:16:59.786975  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key.254d5830
	I0316 00:16:59.787049  123819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key
	I0316 00:16:59.787204  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:59.787248  123819 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:59.787262  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:59.787295  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:59.787351  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:59.787386  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:59.787449  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:59.788288  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:59.824257  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:59.859470  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:59.904672  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:59.931832  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0316 00:16:59.965654  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:00.006949  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:00.039120  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:00.071341  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:00.095585  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:00.122165  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:00.149982  123819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:00.170019  123819 ssh_runner.go:195] Run: openssl version
	I0316 00:17:00.176232  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:00.188738  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193708  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193780  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.200433  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:00.215116  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:00.228871  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234074  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234141  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.240553  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:00.252454  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:00.264690  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269493  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269573  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.275584  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:00.287859  123819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:00.292474  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:00.298744  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:00.304793  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:00.311156  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:00.317777  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:00.324148  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
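	The series of openssl x509 -checkend 86400 calls above verifies that none of the control-plane certificates expire within the next 24 hours. An equivalent check in Go with crypto/x509, as a sketch under the assumption that each file holds a single PEM-encoded certificate:

```go
// Sketch only: report whether a PEM certificate expires within a given duration.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```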
	I0316 00:17:00.330667  123819 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:00.330763  123819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:00.330813  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.374868  123819 cri.go:89] found id: ""
	I0316 00:17:00.374961  123819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:00.386218  123819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:00.386240  123819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:00.386245  123819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:00.386288  123819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:00.397129  123819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:00.398217  123819 kubeconfig.go:125] found "default-k8s-diff-port-313436" server: "https://192.168.72.198:8444"
	I0316 00:17:00.400506  123819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:00.411430  123819 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.198
	I0316 00:17:00.411462  123819 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:00.411477  123819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:00.411528  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.448545  123819 cri.go:89] found id: ""
	I0316 00:17:00.448619  123819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:00.469230  123819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:00.480622  123819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:00.480644  123819 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:00.480695  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0316 00:16:59.384420  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.094272  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.390117  123537 pod_ready.go:92] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.390145  123537 pod_ready.go:81] duration metric: took 5.012377671s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.390156  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398207  123537 pod_ready.go:92] pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.398236  123537 pod_ready.go:81] duration metric: took 8.071855ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398248  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405415  123537 pod_ready.go:92] pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.405443  123537 pod_ready.go:81] duration metric: took 7.186495ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405453  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412646  123537 pod_ready.go:92] pod "kube-proxy-8fpc5" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.412665  123537 pod_ready.go:81] duration metric: took 7.204465ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412673  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606336  123537 pod_ready.go:92] pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.606369  123537 pod_ready.go:81] duration metric: took 193.687951ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606384  123537 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:00.492088  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:00.743504  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:00.756322  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0316 00:17:00.766476  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:00.766545  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:00.776849  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.786610  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:00.786676  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.797455  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0316 00:17:00.808026  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:00.808083  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
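Note: kubeadm.go:162 treats grep exit status 2 (an error, here "No such file or directory") the same as status 1 (no match): either way the existing kubeconfig cannot be confirmed to point at https://control-plane.minikube.internal:8444, so it is removed and later regenerated by the kubeconfig init phase. The shell logic, sketched:

    # grep exits 0 on match, 1 on no match, 2 on error (e.g. missing file)
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"   # regenerated by 'kubeadm init phase kubeconfig all'
    done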
	I0316 00:17:00.819306  123819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:00.834822  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:00.962203  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.535753  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.762322  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.843195  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
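Note: instead of a full kubeadm init, the restart path re-runs individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml, regenerating certificates, kubeconfigs and static-pod manifests without recreating the cluster. What the phases leave behind can be inspected on the guest at the standard kubeadm locations:

    sudo ls /etc/kubernetes/pki          # certificates from 'phase certs all'
    sudo ls /etc/kubernetes/*.conf       # kubeconfigs from 'phase kubeconfig all'
    sudo ls /etc/kubernetes/manifests    # static pods from 'phase control-plane all' and 'phase etcd local'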
	I0316 00:17:01.944855  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:01.944971  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.446047  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.945791  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.983641  123819 api_server.go:72] duration metric: took 1.038786332s to wait for apiserver process to appear ...
	I0316 00:17:02.983680  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:02.983704  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:04.615157  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:07.114447  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:06.343729  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.343763  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.343786  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.364621  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.364659  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.483852  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.491403  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.491433  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:06.983931  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.994258  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.994296  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.483821  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.506265  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:07.506301  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.983846  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.988700  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:17:07.995996  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:17:07.996021  123819 api_server.go:131] duration metric: took 5.012333318s to wait for apiserver health ...
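Note: the healthz wait above shows the usual progression while the control plane comes back: 403 first (anonymous requests to /healthz are rejected until the RBAC bootstrap roles allowing them have been reconciled), then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still pending, and finally 200 once every hook reports ok. The same endpoint can be probed by hand (a sketch; -k skips TLS verification, matching the anonymous check):

    # Verbose output lists the individual checks seen in the log
    curl -k 'https://192.168.72.198:8444/healthz?verbose'
    # Individual checks are also addressable, e.g. only etcd:
    curl -k 'https://192.168.72.198:8444/healthz/etcd'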
	I0316 00:17:07.996032  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:17:07.996041  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:07.998091  123819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:17:07.999628  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:17:08.010263  123819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
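Note: the 457-byte 1-k8s.conflist written above is minikube's bridge CNI configuration for the kvm2 + crio combination. Its exact contents are generated by minikube; a minimal bridge + host-local config of the same general shape (illustrative only, not the generated file) looks like:

    {
      "cniVersion": "0.4.0",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }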
	I0316 00:17:08.041667  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:17:08.053611  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:17:08.053656  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:17:08.053668  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:17:08.053681  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:17:08.053694  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:17:08.053706  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:17:08.053717  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:17:08.053730  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:17:08.053739  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:17:08.053747  123819 system_pods.go:74] duration metric: took 12.054433ms to wait for pod list to return data ...
	I0316 00:17:08.053763  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:17:08.057781  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:17:08.057808  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:17:08.057818  123819 node_conditions.go:105] duration metric: took 4.047698ms to run NodePressure ...
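Note: system_pods.go and node_conditions.go above are the programmatic version of listing kube-system pods and reading the node's capacity. Run on the guest against the same kubeconfig and kubectl binary the test uses:

    # The pod list the waiter sees
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl get pods -n kube-system
    # Capacity behind the "ephemeral capacity is 17734596Ki" / "cpu capacity is 2" lines
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl get nodes -o jsonpath='{.items[0].status.capacity}'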
	I0316 00:17:08.057837  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:08.282870  123819 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288338  123819 kubeadm.go:733] kubelet initialised
	I0316 00:17:08.288359  123819 kubeadm.go:734] duration metric: took 5.456436ms waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288367  123819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:08.294256  123819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.302762  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302802  123819 pod_ready.go:81] duration metric: took 8.523485ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.302814  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302823  123819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.309581  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309604  123819 pod_ready.go:81] duration metric: took 6.77179ms for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.309617  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309625  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.315399  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315419  123819 pod_ready.go:81] duration metric: took 5.78558ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.315428  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315434  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.445776  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445808  123819 pod_ready.go:81] duration metric: took 130.363739ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.445821  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445829  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.846181  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846228  123819 pod_ready.go:81] duration metric: took 400.382095ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.846243  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846251  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.245568  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245599  123819 pod_ready.go:81] duration metric: took 399.329058ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.245612  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245618  123819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.646855  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646888  123819 pod_ready.go:81] duration metric: took 401.262603ms for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.646901  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646909  123819 pod_ready.go:38] duration metric: took 1.358531936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:09.646926  123819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:17:09.659033  123819 ops.go:34] apiserver oom_adj: -16
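Note: ops.go:34 reads the API server's legacy /proc/<pid>/oom_adj value; -16 on the -17..15 scale means the kernel is strongly discouraged from OOM-killing it (-17 would disable OOM kill entirely). The same check, plus the modern oom_score_adj equivalent on the -1000..1000 scale:

    # Same pgrep pattern as the log; -f matches the full command line, -n picks the newest match
    sudo cat /proc/$(pgrep -xnf 'kube-apiserver.*minikube.*')/oom_adj
    sudo cat /proc/$(pgrep -xnf 'kube-apiserver.*minikube.*')/oom_score_adj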
	I0316 00:17:09.659059  123819 kubeadm.go:591] duration metric: took 9.272806311s to restartPrimaryControlPlane
	I0316 00:17:09.659070  123819 kubeadm.go:393] duration metric: took 9.328414192s to StartCluster
	I0316 00:17:09.659091  123819 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.659166  123819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:09.661439  123819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.661729  123819 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:17:09.663462  123819 out.go:177] * Verifying Kubernetes components...
	I0316 00:17:09.661800  123819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:17:09.661986  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:17:09.664841  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:09.664874  123819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664839  123819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664964  123819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.664980  123819 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:17:09.664847  123819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.665023  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.665037  123819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.665053  123819 addons.go:243] addon metrics-server should already be in state true
	I0316 00:17:09.665084  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.664922  123819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313436"
	I0316 00:17:09.665349  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665377  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665445  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665474  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665607  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665637  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.680337  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0316 00:17:09.680351  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0316 00:17:09.680799  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.680939  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.681331  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681366  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681541  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681560  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681736  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.681974  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.682359  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682407  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.682461  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682494  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.683660  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0316 00:17:09.684088  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.684575  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.684600  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.684992  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.685218  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.688973  123819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.688994  123819 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:17:09.689028  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.689372  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.689397  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.698126  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0316 00:17:09.698527  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.699052  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.699079  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.699407  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.699606  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.700389  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0316 00:17:09.700824  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.701308  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.701327  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.701610  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.701681  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.704168  123819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:17:09.701891  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.704403  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0316 00:17:09.706042  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:17:09.706076  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:17:09.706102  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.706988  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.707805  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.707831  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.708465  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.708556  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.709451  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.709500  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.709520  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.711354  123819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:09.709911  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.710103  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.712849  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.712865  123819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:09.712886  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:17:09.712910  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.713010  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.713202  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.713365  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.715688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716029  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.716064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716260  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.716437  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.716662  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.716826  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.725309  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0316 00:17:09.725659  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.726175  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.726191  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.726492  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.726665  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.728459  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.728721  123819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.728739  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:17:09.728753  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.732122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732546  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.732576  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732733  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.732908  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.733064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.733206  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.838182  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:09.857248  123819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:09.956751  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:17:09.956775  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:17:09.982142  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.992293  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:17:09.992319  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:17:10.000878  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:10.035138  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:10.035171  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:17:10.066721  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:11.153759  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171576504s)
	I0316 00:17:11.153815  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.153828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154237  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154241  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154262  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.154271  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.154281  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154569  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154601  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154609  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165531  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.165579  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.165868  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.165922  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165879  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536530  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.469764101s)
	I0316 00:17:11.536596  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536607  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536648  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53572281s)
	I0316 00:17:11.536694  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536713  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536963  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536988  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536995  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537001  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537005  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537010  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537013  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537019  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537218  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537365  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537376  123819 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-313436"
	I0316 00:17:11.537404  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537425  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.539481  123819 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
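Note: addon enablement here amounts to copying manifests into /etc/kubernetes/addons on the guest and applying them with the node's own kubeconfig; the storageclass, storage-provisioner and metrics-server manifests above go through three separate kubectl invocations. Assuming the manifests create the usual metrics-server Deployment and APIService, the result can be checked the same way (names assumed, not taken from the manifests themselves):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl -n kube-system get deploy metrics-server
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl get storageclass
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.28.4/kubectl get apiservices | grep metrics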
	I0316 00:17:09.114699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:11.613507  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:13.204814  123454 start.go:364] duration metric: took 52.116735477s to acquireMachinesLock for "no-preload-238598"
	I0316 00:17:13.204888  123454 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:17:13.204900  123454 fix.go:54] fixHost starting: 
	I0316 00:17:13.205405  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:13.205446  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:13.222911  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0316 00:17:13.223326  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:13.223784  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:17:13.223811  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:13.224153  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:13.224338  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:13.224507  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:17:13.226028  123454 fix.go:112] recreateIfNeeded on no-preload-238598: state=Stopped err=<nil>
	I0316 00:17:13.226051  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	W0316 00:17:13.226232  123454 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:17:13.227865  123454 out.go:177] * Restarting existing kvm2 VM for "no-preload-238598" ...
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
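Note: WaitForSSH simply retries a no-op "exit 0" over SSH until the guest's sshd answers; the options shown disable host-key checking and password authentication because the per-machine key under .minikube/machines is the only credential. The same probe as a one-liner (a sketch of what the driver runs):

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o PasswordAuthentication=no -o ConnectTimeout=10 -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa \
        docker@192.168.39.107 'exit 0' && echo ssh is up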
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
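	Note: the three scp calls above install the CA certificate and the freshly generated server keypair under /etc/docker inside the guest. A minimal sketch for spot-checking the installed server certificate over the same SSH session; the openssl invocation and grep pattern are illustrative assumptions, not commands taken from this log:

		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
		# should list the SANs requested during provisioning:
		# 127.0.0.1, 192.168.39.107, localhost, minikube, old-k8s-version-402923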
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
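	Note: the tee/restart step a few lines above leaves a one-line sysconfig drop-in for CRI-O; how the crio unit sources it (presumably via an EnvironmentFile entry) is an assumption here, but the file contents are taken verbatim from the logged command. A quick sanity check:

		cat /etc/sysconfig/crio.minikube
		# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
		systemctl is-active crio   # should print "active" once the restart has finished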
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
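	Note: the four commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: they pin the pause image, switch the cgroup manager to cgroupfs, re-add conmon_cgroup = "pod" (the value CRI-O expects when cgroupfs is used), and clear stale CNI state under /etc/cni/net.mk. A quick way to confirm the resulting drop-in, limited to the keys the sed expressions touch:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
		# pause_image = "registry.k8s.io/pause:3.2"
		# cgroup_manager = "cgroupfs"
		# conmon_cgroup = "pod"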
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
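	Note: the sysctl probe above fails only because br_netfilter is not loaded yet, so minikube falls back to modprobe and then enables IPv4 forwarding directly. A short sketch for re-checking the kernel prerequisites afterwards:

		lsmod | grep br_netfilter                   # loaded by the modprobe above
		sysctl net.bridge.bridge-nf-call-iptables   # resolvable now; typically 1 once the module is loaded
		cat /proc/sys/net/ipv4/ip_forward           # 1, set by the echo above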
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
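	Note: because the crictl check above found no preloaded images, minikube copies the ~473 MB preload tarball into the guest; the later log lines show it being unpacked into /var with lz4 and then removed. Expressed as standalone commands (paths as in the log; the removal form is an assumption, the log only records that the file is deleted):

		sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
		sudo rm /preloaded.tar.lz4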
	I0316 00:17:11.540876  123819 addons.go:505] duration metric: took 1.87908534s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0316 00:17:11.862772  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.866333  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.229181  123454 main.go:141] libmachine: (no-preload-238598) Calling .Start
	I0316 00:17:13.229409  123454 main.go:141] libmachine: (no-preload-238598) Ensuring networks are active...
	I0316 00:17:13.230257  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network default is active
	I0316 00:17:13.230618  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network mk-no-preload-238598 is active
	I0316 00:17:13.231135  123454 main.go:141] libmachine: (no-preload-238598) Getting domain xml...
	I0316 00:17:13.232023  123454 main.go:141] libmachine: (no-preload-238598) Creating domain...
	I0316 00:17:14.513800  123454 main.go:141] libmachine: (no-preload-238598) Waiting to get IP...
	I0316 00:17:14.514838  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.515446  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.515520  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.515407  125029 retry.go:31] will retry after 275.965955ms: waiting for machine to come up
	I0316 00:17:14.793095  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.793594  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.793721  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.793667  125029 retry.go:31] will retry after 347.621979ms: waiting for machine to come up
	I0316 00:17:15.143230  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.143869  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.143909  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.143820  125029 retry.go:31] will retry after 301.441766ms: waiting for machine to come up
	I0316 00:17:15.446476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.446917  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.446964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.446865  125029 retry.go:31] will retry after 431.207345ms: waiting for machine to come up
	I0316 00:17:13.615911  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.616381  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:17.618352  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:16.362143  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:16.866488  123819 node_ready.go:49] node "default-k8s-diff-port-313436" has status "Ready":"True"
	I0316 00:17:16.866522  123819 node_ready.go:38] duration metric: took 7.00923342s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:16.866535  123819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:16.881909  123819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897574  123819 pod_ready.go:92] pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:16.897617  123819 pod_ready.go:81] duration metric: took 15.618728ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897630  123819 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:18.910740  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.879693  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.880186  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.880222  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.880148  125029 retry.go:31] will retry after 747.650888ms: waiting for machine to come up
	I0316 00:17:16.629378  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:16.631312  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:16.631352  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:16.631193  125029 retry.go:31] will retry after 670.902171ms: waiting for machine to come up
	I0316 00:17:17.304282  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:17.304704  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:17.304751  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:17.304658  125029 retry.go:31] will retry after 1.160879196s: waiting for machine to come up
	I0316 00:17:18.466662  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:18.467103  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:18.467136  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:18.467049  125029 retry.go:31] will retry after 948.597188ms: waiting for machine to come up
	I0316 00:17:19.417144  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:19.417623  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:19.417657  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:19.417561  125029 retry.go:31] will retry after 1.263395738s: waiting for machine to come up
	I0316 00:17:20.289713  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.613643  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
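
The kubeadm.yaml written above bundles three documents (InitConfiguration/ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the kubeadm options struct logged at 00:17:21.293389. Below is a minimal sketch of rendering such a fragment with Go's text/template; the struct fields and template text are illustrative only, not minikube's actual template.

// Hypothetical sketch: render a KubeletConfiguration fragment from a struct
// with text/template, in the spirit of the kubeadm.yaml generation above.
package main

import (
	"os"
	"text/template"
)

type kubeletOpts struct {
	ClientCAFile  string
	CgroupDriver  string
	ClusterDomain string
	StaticPodPath string
}

const kubeletTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: {{.ClientCAFile}}
cgroupDriver: {{.CgroupDriver}}
clusterDomain: "{{.ClusterDomain}}"
staticPodPath: {{.StaticPodPath}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletTmpl))
	opts := kubeletOpts{
		ClientCAFile:  "/var/lib/minikube/certs/ca.crt",
		CgroupDriver:  "cgroupfs",
		ClusterDomain: "cluster.local",
		StaticPodPath: "/etc/kubernetes/manifests",
	}
	// Writes the rendered YAML to stdout; minikube instead scp's the file to
	// /var/tmp/minikube/kubeadm.yaml.new as seen in the log below.
	if err := t.Execute(os.Stdout, opts); err != nil {
		panic(err)
	}
}
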
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
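
The bash one-liner at 00:17:21.379918 keeps the control-plane.minikube.internal mapping idempotent: any stale line for that name is filtered out of /etc/hosts before the current IP is appended. A minimal Go sketch of the same filter-and-append approach follows; it is a hypothetical helper operating on a throwaway copy, not minikube's code.

// Sketch of the idempotent hosts-file update shown above: drop any line that
// already maps the control-plane name, then append the current IP.
package main

import (
	"fmt"
	"os"
	"strings"
)

func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // same role as `grep -v $'\t<name>$'` in the logged command
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Work on a throwaway copy so the sketch does not touch the real /etc/hosts.
	_ = os.WriteFile("hosts.copy", []byte("127.0.0.1\tlocalhost\n192.168.39.1\tcontrol-plane.minikube.internal\n"), 0644)
	if err := pinHost("hosts.copy", "192.168.39.107", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
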
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
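
Each CA copied into /usr/share/ca-certificates above is also linked under /etc/ssl/certs by its OpenSSL subject hash (b5213941.0, 51391683.0, 3ec20f2e.0), which is how OpenSSL locates trusted CAs in a hashed certificate directory. The sketch below derives the hash with `openssl x509 -hash` and creates the matching symlink; paths are illustrative and writing under /etc/ssl/certs needs root.

// Sketch of the subject-hash symlink step above: ask openssl for the cert's
// subject hash and point <hash>.0 at the certificate.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))     // e.g. "b5213941"
	link := filepath.Join(certsDir, hash+".0") // e.g. /etc/ssl/certs/b5213941.0
	_ = os.Remove(link)                        // mirror `ln -fs` semantics
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
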
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
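
The six `-checkend 86400` runs above verify that none of the control-plane certificates expire within the next 24 hours; openssl exits non-zero if a certificate would already be expired 86400 seconds from now. The equivalent check with crypto/x509, as a hypothetical helper:

// Sketch of the `-checkend 86400` probe: parse the PEM certificate and report
// whether it is still valid 24 hours from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func validFor(certPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("valid for another 24h:", ok)
}
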
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:21.865146  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.241535  123819 pod_ready.go:92] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.241561  123819 pod_ready.go:81] duration metric: took 5.34392174s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.241573  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247469  123819 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.247501  123819 pod_ready.go:81] duration metric: took 5.919787ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247515  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756151  123819 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.756180  123819 pod_ready.go:81] duration metric: took 508.652978ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756194  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762214  123819 pod_ready.go:92] pod "kube-proxy-btmmm" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.762254  123819 pod_ready.go:81] duration metric: took 6.041426ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762268  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769644  123819 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.769668  123819 pod_ready.go:81] duration metric: took 7.391813ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769681  123819 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:24.780737  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.682443  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:20.798804  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:20.798840  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:20.682821  125029 retry.go:31] will retry after 1.834378571s: waiting for machine to come up
	I0316 00:17:22.518539  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:22.518997  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:22.519027  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:22.518945  125029 retry.go:31] will retry after 1.944866033s: waiting for machine to come up
	I0316 00:17:24.466332  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:24.466902  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:24.466930  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:24.466847  125029 retry.go:31] will retry after 3.4483736s: waiting for machine to come up
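
The retry.go lines above ("will retry after 1.83s ... 1.94s ... 3.45s") poll the libvirt network until the no-preload-238598 domain reports an IP address, stretching the pause between attempts. Below is a generic sketch of that retry shape; the backoff constants and the probe are illustrative, not minikube's exact values.

// Sketch of the retry-with-growing-delay pattern in the libmachine log above:
// keep probing until the check succeeds or the deadline passes, stretching the
// pause a little (with jitter) each round.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryUntil(deadline time.Duration, probe func() error) error {
	start := time.Now()
	delay := time.Second
	for attempt := 1; ; attempt++ {
		err := probe()
		if err == nil {
			return nil
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("retry %d: will retry after %v: %v\n", attempt, wait, err)
		time.Sleep(wait)
		delay = delay * 3 / 2 // stretch the next pause
	}
}

func main() {
	calls := 0
	err := retryUntil(30*time.Second, func() error {
		calls++
		if calls < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
	fmt.Println("result:", err)
}
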
	I0316 00:17:24.615642  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.113920  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.278017  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:29.777128  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.919457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:27.919931  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:27.919964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:27.919891  125029 retry.go:31] will retry after 3.122442649s: waiting for machine to come up
	I0316 00:17:29.613500  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.613674  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
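
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs from 00:17:23.721 onward poll roughly every 500ms for a kube-apiserver process before the restart continues. A local sketch of the same wait loop follows; the pattern and interval come from the log, the helper itself is illustrative.

// Sketch of the apiserver wait loop above: poll pgrep on a fixed interval
// until the process shows up or the timeout expires.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		// pgrep exits 0 when at least one process matches the pattern.
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("no process matching %q after %v", pattern, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("kube-apiserver is up")
}
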
	I0316 00:17:32.276855  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:34.277228  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.044512  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:31.044939  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:31.044970  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:31.044884  125029 retry.go:31] will retry after 4.529863895s: waiting for machine to come up
	I0316 00:17:34.112266  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:36.118023  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:35.576311  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.576834  123454 main.go:141] libmachine: (no-preload-238598) Found IP for machine: 192.168.50.137
	I0316 00:17:35.576858  123454 main.go:141] libmachine: (no-preload-238598) Reserving static IP address...
	I0316 00:17:35.576875  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has current primary IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.577312  123454 main.go:141] libmachine: (no-preload-238598) Reserved static IP address: 192.168.50.137
	I0316 00:17:35.577355  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.577365  123454 main.go:141] libmachine: (no-preload-238598) Waiting for SSH to be available...
	I0316 00:17:35.577404  123454 main.go:141] libmachine: (no-preload-238598) DBG | skip adding static IP to network mk-no-preload-238598 - found existing host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"}
	I0316 00:17:35.577419  123454 main.go:141] libmachine: (no-preload-238598) DBG | Getting to WaitForSSH function...
	I0316 00:17:35.579640  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580061  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.580108  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580210  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH client type: external
	I0316 00:17:35.580269  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa (-rw-------)
	I0316 00:17:35.580303  123454 main.go:141] libmachine: (no-preload-238598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:35.580319  123454 main.go:141] libmachine: (no-preload-238598) DBG | About to run SSH command:
	I0316 00:17:35.580339  123454 main.go:141] libmachine: (no-preload-238598) DBG | exit 0
	I0316 00:17:35.711373  123454 main.go:141] libmachine: (no-preload-238598) DBG | SSH cmd err, output: <nil>: 
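
The WaitForSSH step above shells out to the external ssh client with host-key checking disabled and runs `exit 0` purely as a reachability probe; a zero exit means the guest's sshd is up and accepts the key. Here is a sketch of that probe reusing the options from the logged command line; the wrapper itself is illustrative.

// Sketch of the `exit 0` SSH probe above: invoke the external ssh client with
// the same non-interactive options and treat a zero exit as "SSH is ready".
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func sshReady(user, host, keyPath string) bool {
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		fmt.Sprintf("%s@%s", user, host),
		"exit 0")
	return cmd.Run() == nil
}

func main() {
	ok := sshReady("docker", "192.168.50.137",
		"/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa")
	if !ok {
		fmt.Fprintln(os.Stderr, "SSH not ready yet")
		os.Exit(1)
	}
	fmt.Println("SSH is available")
}
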
	I0316 00:17:35.711791  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetConfigRaw
	I0316 00:17:35.712598  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:35.715455  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.715929  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.715954  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.716326  123454 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:17:35.716525  123454 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:35.716551  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:35.716802  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.719298  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719612  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.719644  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719780  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.720005  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720178  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720315  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.720487  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.720666  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.720677  123454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:35.835733  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:35.835760  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836004  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:17:35.836033  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836240  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.839024  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839413  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.839445  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839627  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.839811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.839977  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.840133  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.840279  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.840485  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.840504  123454 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-238598 && echo "no-preload-238598" | sudo tee /etc/hostname
	I0316 00:17:35.976590  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-238598
	
	I0316 00:17:35.976624  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.979354  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979689  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.979720  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979879  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.980104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980267  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980445  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.980602  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.980796  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.980815  123454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-238598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-238598/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-238598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:36.106710  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:36.106750  123454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:36.106774  123454 buildroot.go:174] setting up certificates
	I0316 00:17:36.106786  123454 provision.go:84] configureAuth start
	I0316 00:17:36.106800  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:36.107104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.110050  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110431  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.110476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110592  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.113019  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113366  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.113391  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113517  123454 provision.go:143] copyHostCerts
	I0316 00:17:36.113595  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:36.113619  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:36.113699  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:36.113898  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:36.113911  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:36.113964  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:36.114051  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:36.114063  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:36.114089  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:36.114155  123454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.no-preload-238598 san=[127.0.0.1 192.168.50.137 localhost minikube no-preload-238598]
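
provision.go:117 issues a server certificate signed by the machine CA with the SAN list shown above (127.0.0.1, the machine IP, localhost, minikube, and the machine name). Below is a compact crypto/x509 sketch of issuing such a certificate; to stay self-contained it also creates a throwaway CA, whereas minikube reuses the ca.pem/ca-key.pem from its certs directory. Error handling is elided for brevity.

// Sketch of the "generating server cert ... san=[...]" step above: issue a
// server certificate carrying IP and DNS SANs, signed by a CA.
package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "throwaway-ca"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "no-preload-238598"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.137")},
		DNSNames:    []string{"localhost", "minikube", "no-preload-238598"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
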
	I0316 00:17:36.239622  123454 provision.go:177] copyRemoteCerts
	I0316 00:17:36.239706  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:36.239736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.242440  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.242806  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.242841  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.243086  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.243279  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.243482  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.243623  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.330601  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:36.359600  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:17:36.384258  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:36.409195  123454 provision.go:87] duration metric: took 302.39571ms to configureAuth
	I0316 00:17:36.409239  123454 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:36.409440  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:17:36.409539  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.412280  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412618  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.412652  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.413039  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413217  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413366  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.413576  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.413803  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.413823  123454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:36.703300  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:36.703365  123454 machine.go:97] duration metric: took 986.82471ms to provisionDockerMachine
	I0316 00:17:36.703418  123454 start.go:293] postStartSetup for "no-preload-238598" (driver="kvm2")
	I0316 00:17:36.703440  123454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:36.703474  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.703838  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:36.703880  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.706655  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707019  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.707057  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707237  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.707470  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.707626  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.707822  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.794605  123454 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:36.799121  123454 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:36.799151  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:36.799222  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:36.799298  123454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:36.799423  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:36.808805  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:36.834244  123454 start.go:296] duration metric: took 130.803052ms for postStartSetup
	I0316 00:17:36.834290  123454 fix.go:56] duration metric: took 23.629390369s for fixHost
	I0316 00:17:36.834318  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.837197  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837643  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.837684  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837926  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.838155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838360  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838533  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.838721  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.838965  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.838982  123454 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:17:36.956309  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548256.900043121
	
	I0316 00:17:36.956352  123454 fix.go:216] guest clock: 1710548256.900043121
	I0316 00:17:36.956366  123454 fix.go:229] Guest: 2024-03-16 00:17:36.900043121 +0000 UTC Remote: 2024-03-16 00:17:36.83429667 +0000 UTC m=+356.318603082 (delta=65.746451ms)
	I0316 00:17:36.956398  123454 fix.go:200] guest clock delta is within tolerance: 65.746451ms
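
fix.go reads the guest clock over SSH with `date +%s.%N` and compares it to the host clock; here the 65.7ms delta is within tolerance, so the guest clock is left alone. The sketch below parses that output and computes the delta; the tolerance value is illustrative, not minikube's configured threshold.

// Sketch of the guest-clock comparison above: parse `date +%s.%N` output as
// fractional seconds, compare against the host clock, and decide whether the
// skew is within tolerance.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func clockDelta(dateOutput string, hostNow time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return hostNow.Sub(guest), nil
}

func main() {
	// Guest and host timestamps taken from the log lines above.
	delta, err := clockDelta("1710548256.900043121", time.Unix(0, 1710548256834296670))
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // illustrative threshold
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta.Abs() < tolerance)
}
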
	I0316 00:17:36.956425  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 23.751563248s
	I0316 00:17:36.956472  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.956736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.960077  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960494  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.960524  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960678  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961247  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961454  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961522  123454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:36.961588  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.961730  123454 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:36.961756  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.964457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964801  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.964834  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964905  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965346  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965374  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.965406  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965518  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.965609  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965681  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.965739  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965866  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.966034  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:37.077559  123454 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:37.084485  123454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:37.229503  123454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:37.236783  123454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:37.236862  123454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:37.255248  123454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:37.255275  123454 start.go:494] detecting cgroup driver to use...
	I0316 00:17:37.255377  123454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:37.272795  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:37.289822  123454 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:37.289885  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:37.306082  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:37.322766  123454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:37.448135  123454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:37.614316  123454 docker.go:233] disabling docker service ...
	I0316 00:17:37.614381  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:37.630091  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:37.645025  123454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:37.773009  123454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:37.891459  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:37.906829  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:37.927910  123454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:17:37.927982  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.939166  123454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:37.939226  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.950487  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.961547  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.972402  123454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:37.983413  123454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:37.993080  123454 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:37.993147  123454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:38.007746  123454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:38.017917  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:38.158718  123454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:38.329423  123454 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:38.329520  123454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:38.334518  123454 start.go:562] Will wait 60s for crictl version
	I0316 00:17:38.334570  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.338570  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:38.375688  123454 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:38.375779  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.408167  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.444754  123454 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.277480  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.281375  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.446078  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:38.448885  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449299  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:38.449329  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449565  123454 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:38.453922  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:38.467515  123454 kubeadm.go:877] updating cluster {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:38.467646  123454 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:17:38.467690  123454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:38.511057  123454 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:17:38.511093  123454 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:38.511189  123454 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.511221  123454 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0316 00:17:38.511240  123454 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.511253  123454 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.511305  123454 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.511335  123454 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.511338  123454 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.511188  123454 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.512934  123454 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.512949  123454 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.512953  123454 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0316 00:17:38.513014  123454 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.648129  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.650306  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.661334  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0316 00:17:38.666656  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.669280  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.684494  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.690813  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.760339  123454 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0316 00:17:38.760396  123454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.760449  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.760545  123454 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0316 00:17:38.760585  123454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.760641  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908463  123454 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0316 00:17:38.908491  123454 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0316 00:17:38.908515  123454 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.908525  123454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908579  123454 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0316 00:17:38.908607  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.908615  123454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.908585  123454 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908638  123454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.908739  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.954587  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.954611  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.954699  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.961857  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.961878  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0316 00:17:38.961979  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:38.962005  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.962010  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:39.052859  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.052888  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0316 00:17:39.052907  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.052958  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.052976  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.053001  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0316 00:17:39.052963  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.053055  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.053060  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0316 00:17:39.053100  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:39.053156  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.053235  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.120914  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.612614  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.779012  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:43.278631  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:41.133735  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.080597621s)
	I0316 00:17:41.133778  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0316 00:17:41.133890  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.080807025s)
	I0316 00:17:41.133924  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0316 00:17:41.133942  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.08085981s)
	I0316 00:17:41.133972  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133978  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.080988823s)
	I0316 00:17:41.133993  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133948  123454 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134011  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.080758975s)
	I0316 00:17:41.134031  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0316 00:17:41.134032  123454 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.01309054s)
	I0316 00:17:41.134060  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134083  123454 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0316 00:17:41.134110  123454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:41.134160  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:43.198894  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.064808781s)
	I0316 00:17:43.198926  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0316 00:17:43.198952  123454 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.198951  123454 ssh_runner.go:235] Completed: which crictl: (2.064761171s)
	I0316 00:17:43.199004  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.199051  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:43.112939  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.114446  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.613592  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.776235  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.777686  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.278307  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.110501  123454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.911421102s)
	I0316 00:17:47.110567  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0316 00:17:47.110695  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.911660704s)
	I0316 00:17:47.110728  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0316 00:17:47.110751  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:47.110703  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:47.110802  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:49.585079  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.474253503s)
	I0316 00:17:49.585109  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0316 00:17:49.585130  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.474308112s)
	I0316 00:17:49.585160  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0316 00:17:49.585134  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.585220  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.613704  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.615227  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:54.780467  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.736360  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.151102687s)
	I0316 00:17:51.736402  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0316 00:17:51.736463  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:51.736535  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:54.214591  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477993231s)
	I0316 00:17:54.214629  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0316 00:17:54.214658  123454 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:54.214728  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:55.171123  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0316 00:17:55.171204  123454 cache_images.go:123] Successfully loaded all cached images
	I0316 00:17:55.171213  123454 cache_images.go:92] duration metric: took 16.660103091s to LoadCachedImages
	I0316 00:17:55.171233  123454 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:17:55.171506  123454 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-238598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:55.171617  123454 ssh_runner.go:195] Run: crio config
	I0316 00:17:55.225056  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:17:55.225078  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:55.225089  123454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:55.225110  123454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-238598 NodeName:no-preload-238598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:17:55.225278  123454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-238598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:55.225371  123454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:17:55.237834  123454 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:55.237896  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:55.248733  123454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 00:17:55.266587  123454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:17:55.285283  123454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0316 00:17:55.303384  123454 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:55.307384  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:55.321079  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:55.453112  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:55.470573  123454 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598 for IP: 192.168.50.137
	I0316 00:17:55.470600  123454 certs.go:194] generating shared ca certs ...
	I0316 00:17:55.470623  123454 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:55.470808  123454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:55.470868  123454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:55.470906  123454 certs.go:256] generating profile certs ...
	I0316 00:17:55.471028  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.key
	I0316 00:17:55.471140  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key.0f2ae39d
	I0316 00:17:55.471195  123454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key
	I0316 00:17:55.471410  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:55.471463  123454 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:55.471483  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:55.471515  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:55.471542  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:55.471568  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:55.471612  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:55.472267  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:55.517524  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:54.115678  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:56.613196  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.277553  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:59.277770  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.567992  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:55.601463  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:55.637956  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:17:55.670063  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:55.694990  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:55.718916  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:17:55.744124  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:55.770051  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:55.794846  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:55.819060  123454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:55.836991  123454 ssh_runner.go:195] Run: openssl version
	I0316 00:17:55.844665  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:55.857643  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862493  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862561  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.868430  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:55.880551  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:55.891953  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896627  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896687  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.902539  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:55.915215  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:55.926699  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931120  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931172  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.936791  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:55.948180  123454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:55.953021  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:55.959107  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:55.965018  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:55.971159  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:55.977069  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:55.983062  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
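The openssl x509 -noout -checkend 86400 calls above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A minimal Go sketch of the same check (illustrative only, not minikube's implementation; the certificate path is a placeholder) could look like:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Placeholder path; the log above checks certs under /var/lib/minikube/certs on the guest.
    	data, err := os.ReadFile("apiserver-kubelet-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM certificate found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Equivalent of "openssl x509 -checkend 86400": fail if the cert expires within 24h.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate expires within 24h:", cert.NotAfter)
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid until", cert.NotAfter)
    }
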
	I0316 00:17:55.989119  123454 kubeadm.go:391] StartCluster: {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:55.989201  123454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:55.989254  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.029128  123454 cri.go:89] found id: ""
	I0316 00:17:56.029209  123454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:56.040502  123454 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:56.040525  123454 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:56.040531  123454 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:56.040577  123454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:56.051843  123454 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:56.052995  123454 kubeconfig.go:125] found "no-preload-238598" server: "https://192.168.50.137:8443"
	I0316 00:17:56.055273  123454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:56.066493  123454 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0316 00:17:56.066547  123454 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:56.066564  123454 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:56.066641  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.111015  123454 cri.go:89] found id: ""
	I0316 00:17:56.111110  123454 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:56.131392  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:56.142638  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:56.142665  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:56.142725  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:56.154318  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:56.154418  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:56.166011  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:56.176688  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:56.176752  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:56.187776  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.198216  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:56.198285  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.208661  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:56.218587  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:56.218655  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:56.230247  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:56.241302  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:56.361423  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.731067  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.369591288s)
	I0316 00:17:57.731101  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.952457  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.044540  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.179796  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:58.179894  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.680635  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.180617  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.205383  123454 api_server.go:72] duration metric: took 1.025590775s to wait for apiserver process to appear ...
	I0316 00:17:59.205411  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:59.205436  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:59.205935  123454 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0316 00:17:59.706543  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:58.613340  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:00.618869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:01.914835  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.914865  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:01.914879  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:01.972138  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.972173  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:02.206540  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.219111  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.219165  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:02.705639  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.709820  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.709850  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:03.206513  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:03.216320  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:18:03.224237  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:18:03.224263  123454 api_server.go:131] duration metric: took 4.018845389s to wait for apiserver health ...
	I0316 00:18:03.224272  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:18:03.224279  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:18:03.225951  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.777309  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.777625  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.227382  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:18:03.245892  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:18:03.267423  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:18:03.281349  123454 system_pods.go:59] 8 kube-system pods found
	I0316 00:18:03.281387  123454 system_pods.go:61] "coredns-76f75df574-d2f6z" [3cd22981-0f83-4a60-9930-c103cfc2d2ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:18:03.281397  123454 system_pods.go:61] "etcd-no-preload-238598" [d98fa5b6-ad24-4c90-98c8-9e5b8f1a3250] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:18:03.281408  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [e7d7a5a0-9a4f-4df2-aaf7-44c36e5bd313] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:18:03.281420  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [a198865e-0ed5-40b6-8b10-a4fccdefa059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:18:03.281434  123454 system_pods.go:61] "kube-proxy-cjhzn" [6529873c-cb9d-42d8-991d-e450783b1707] Running
	I0316 00:18:03.281443  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [bfb373fb-ec78-4ef1-b92e-3a8af3f805a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:18:03.281457  123454 system_pods.go:61] "metrics-server-57f55c9bc5-hffvp" [4181fe7f-3e95-455b-a744-8f4dca7b870d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:18:03.281466  123454 system_pods.go:61] "storage-provisioner" [d568ae10-7b9c-4c98-8263-a09505227ac7] Running
	I0316 00:18:03.281485  123454 system_pods.go:74] duration metric: took 14.043103ms to wait for pod list to return data ...
	I0316 00:18:03.281501  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:18:03.284899  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:18:03.284923  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:18:03.284934  123454 node_conditions.go:105] duration metric: took 3.425812ms to run NodePressure ...
	I0316 00:18:03.284955  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:18:03.562930  123454 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568376  123454 kubeadm.go:733] kubelet initialised
	I0316 00:18:03.568402  123454 kubeadm.go:734] duration metric: took 5.44437ms waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568412  123454 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:18:03.574420  123454 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:03.113622  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.613724  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:07.614087  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.278238  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.776236  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.582284  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.081679  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.082343  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.113282  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.114515  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.776835  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.777258  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.778115  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.582099  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:13.082243  123454 pod_ready.go:92] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:13.082263  123454 pod_ready.go:81] duration metric: took 9.507817974s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:13.082271  123454 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:15.088733  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.613599  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:16.614876  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.280289  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.777434  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:17.089800  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.092413  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.092441  123454 pod_ready.go:81] duration metric: took 6.010161958s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.092453  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.097972  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.097996  123454 pod_ready.go:81] duration metric: took 5.533097ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.098008  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102186  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.102204  123454 pod_ready.go:81] duration metric: took 4.187939ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102213  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106692  123454 pod_ready.go:92] pod "kube-proxy-cjhzn" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.106712  123454 pod_ready.go:81] duration metric: took 4.492665ms for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106720  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111735  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.111754  123454 pod_ready.go:81] duration metric: took 5.027601ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111764  123454 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.113278  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.114061  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:22.276633  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:24.278807  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.119790  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.618664  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.115414  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.613572  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:26.778891  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:29.277585  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.619282  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.118484  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.121236  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.114043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.119153  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.614043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:31.778203  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.276424  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.618082  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.619339  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.614209  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.113521  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:36.279218  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:38.779161  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.118552  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.619543  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.614042  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.113784  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:41.278664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:43.777450  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.119118  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.119473  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.614102  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:47.112496  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:46.277664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.279095  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:46.619201  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.619302  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:49.113616  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.613449  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:18:50.777409  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:52.779497  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.278072  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.119041  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:53.121052  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:54.113699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:56.613686  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:57.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.277696  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.618835  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.118984  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.119379  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.614207  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:01.113795  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:02.281155  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.779663  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:02.618637  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.619492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:03.613777  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.114458  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:07.276601  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.277239  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.619784  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.118699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:08.613361  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.615062  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:11.277319  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.777280  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:11.119614  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.618997  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.113490  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:15.613530  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:17.613578  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.276204  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.277156  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.118717  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.618005  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:19.614161  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.112808  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:20.777843  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.778609  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.780571  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:20.618505  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.619290  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.118778  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.113901  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:26.115541  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:27.277159  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:29.277242  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:27.618996  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:30.118650  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:28.614101  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.114366  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:31.776661  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.778372  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:32.125130  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:34.619153  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.114785  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.116692  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:37.613605  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:36.276574  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.276784  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:36.619780  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.619966  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:39.614178  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.616246  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:40.779366  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.277656  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.279201  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.118560  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.120706  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:44.113022  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:46.114296  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:47.778494  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.277998  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.619070  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.622001  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.118739  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:48.114952  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.614794  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:52.776113  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.777687  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:52.119145  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.619675  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:53.113139  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:55.113961  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.613751  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.277412  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.277555  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.119685  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.618622  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.614914  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:02.113286  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:01.777542  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.278277  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:01.618756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.119973  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.113918  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.613434  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:06.278976  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.778022  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.124642  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.618968  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.613517  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.613699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.613997  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
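Each cycle above runs the same container-presence check: for every control-plane component, minikube asks crictl whether any container exists (running or exited), and every query comes back empty. The following is a minimal standalone sketch of that check, not minikube's cri.go; it just drives the equivalent crictl invocations from Go and assumes crictl and sudo are available on the node.

// checkcrio.go - hedged sketch mirroring the per-component crictl checks
// logged above. An empty result corresponds to the `found id: ""` /
// "0 containers" lines in the log.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, name := range components {
		// Equivalent to: sudo crictl ps -a --quiet --name=<component>
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			fmt.Printf("%-24s error: %v\n", name, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%-24s %d container(s): %v\n", name, len(ids), ids)
	}
}

Run on the node at this point in the test, every component would report 0 container(s), matching the log.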
	I0316 00:20:11.277492  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.777429  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.619721  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.120185  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.114540  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:17.614281  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
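The recurring "connection to the server localhost:8443 was refused" from `kubectl describe nodes` is the direct consequence of the empty crictl listings: with no kube-apiserver container running, nothing is listening on the apiserver port. A quick, hypothetical probe (not part of minikube) that reproduces the symptom:

// probeapiserver.go - standalone check for whether anything accepts TCP
// connections on the apiserver port used by the kubeconfig above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no kube-apiserver container running, this is the expected path.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("something is listening on", conn.RemoteAddr())
}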
	I0316 00:20:15.781621  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.277078  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.277734  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.620224  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.118862  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.118920  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.117088  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.614917  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:22.779251  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.276842  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.118990  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:24.119699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.114563  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.614869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
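The interleaved pod_ready lines come from the other test clusters polling their metrics-server pods roughly every two seconds until the pod's Ready condition flips to True. A rough approximation of that loop, shelling out to kubectl rather than using minikube's own pod_ready.go, is sketched below (pod and namespace names are taken from the log):

// podready.go - hedged approximation of the readiness polling seen above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const (
		namespace = "kube-system"
		pod       = "metrics-server-57f55c9bc5-bfnwf" // name copied from the log
		jsonpath  = `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	)
	for i := 0; i < 10; i++ {
		out, err := exec.Command("kubectl", "get", "pod", pod,
			"-n", namespace, "-o", jsonpath).Output()
		status := strings.TrimSpace(string(out))
		if err == nil && status == "True" {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Printf("pod %q has status \"Ready\":%q\n", pod, status)
		time.Sleep(2 * time.Second)
	}
	fmt.Println("gave up waiting for readiness")
}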
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.277136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.777082  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:26.619354  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:28.619489  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.619807  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:32.117311  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
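When no control-plane containers are found, the "Gathering logs for ..." step falls back to host-level sources: the kubelet and crio journals, dmesg, and a raw container listing. The sketch below simply replays the exact Run: commands from the log from a small Go program; it is illustrative only and assumes it is executed on the minikube node itself.

// gatherlogs.go - hedged sketch of the fallback log-gathering step.
// The shell commands are copied verbatim from the Run: lines above.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s (err=%v) ===\n%s\n", name, err, out)
	}
}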
	I0316 00:20:32.277582  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.778394  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.622010  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:33.119518  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:35.119736  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.613788  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.277007  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.776793  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:37.121196  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.619239  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:38.616664  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:41.112900  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:41.777952  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.276802  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:42.119128  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.119255  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:43.114941  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:45.614095  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:47.616615  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.277300  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.777275  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.119389  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.618309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.116327  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.614990  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:50.777563  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.778761  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.276863  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.619469  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:53.119593  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.116184  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:57.613355  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:57.776955  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.276381  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.619683  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:58.122772  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:59.616518  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.115379  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.613248  123537 pod_ready.go:81] duration metric: took 4m0.006848891s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:02.613273  123537 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:02.613280  123537 pod_ready.go:38] duration metric: took 4m5.267062496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:02.613297  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:02.613347  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:02.613393  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:02.670107  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:02.670139  123537 cri.go:89] found id: ""
	I0316 00:21:02.670149  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:02.670210  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.675144  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:02.675212  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:02.720695  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:02.720720  123537 cri.go:89] found id: ""
	I0316 00:21:02.720729  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:02.720790  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.725490  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:02.725570  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.276825  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.779811  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.617765  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.619210  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.619603  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.778908  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:02.778959  123537 cri.go:89] found id: ""
	I0316 00:21:02.778971  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:02.779028  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.784772  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:02.784864  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:02.830682  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:02.830709  123537 cri.go:89] found id: ""
	I0316 00:21:02.830719  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:02.830784  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.835733  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:02.835813  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:02.875862  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:02.875890  123537 cri.go:89] found id: ""
	I0316 00:21:02.875902  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:02.875967  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.880801  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:02.880857  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:02.921585  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:02.921611  123537 cri.go:89] found id: ""
	I0316 00:21:02.921622  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:02.921689  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.929521  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:02.929593  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.977621  123537 cri.go:89] found id: ""
	I0316 00:21:02.977646  123537 logs.go:276] 0 containers: []
	W0316 00:21:02.977657  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.977668  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:02.977723  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:03.020159  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.020186  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.020193  123537 cri.go:89] found id: ""
	I0316 00:21:03.020204  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:03.020274  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.025593  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.030718  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:03.030744  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:03.090141  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:03.090182  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:03.147416  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:03.147466  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:03.189686  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:03.189733  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:03.245980  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:03.246020  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.296494  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:03.296534  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:03.349602  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:03.349635  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:03.364783  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:03.364819  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:03.513917  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:03.513955  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:03.567916  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:03.567952  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:03.607620  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:03.607658  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:03.658683  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:03.658717  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.699797  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:03.699827  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:06.715440  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:06.733725  123537 api_server.go:72] duration metric: took 4m16.598062692s to wait for apiserver process to appear ...
	I0316 00:21:06.733759  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:06.733810  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:06.733868  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:06.775396  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:06.775431  123537 cri.go:89] found id: ""
	I0316 00:21:06.775442  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:06.775506  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.780448  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:06.780503  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:06.836927  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:06.836962  123537 cri.go:89] found id: ""
	I0316 00:21:06.836972  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:06.837025  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.841803  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:06.841869  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:06.887445  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:06.887470  123537 cri.go:89] found id: ""
	I0316 00:21:06.887479  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:06.887534  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.892112  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:06.892192  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:06.936614  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:06.936642  123537 cri.go:89] found id: ""
	I0316 00:21:06.936653  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:06.936717  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.943731  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:06.943799  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:06.986738  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:06.986764  123537 cri.go:89] found id: ""
	I0316 00:21:06.986774  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:06.986843  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.991555  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:06.991621  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:07.052047  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:07.052074  123537 cri.go:89] found id: ""
	I0316 00:21:07.052082  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:07.052133  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.057297  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:07.057358  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:07.104002  123537 cri.go:89] found id: ""
	I0316 00:21:07.104034  123537 logs.go:276] 0 containers: []
	W0316 00:21:07.104042  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:07.104049  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:07.104113  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:07.148540  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:07.148562  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:07.148566  123537 cri.go:89] found id: ""
	I0316 00:21:07.148572  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:07.148620  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.153502  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.157741  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:07.157770  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:07.197856  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:07.197889  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:07.654282  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:07.654324  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:07.708539  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:07.708579  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:07.725072  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:07.725104  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
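The "container status" step just above uses a shell fallback rather than a single binary: the backticks and "||" in "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" only work when the command goes through a shell, which is why the log shows it wrapped in /bin/bash -c. A hedged sketch of the same idea (the command string is copied from the log; this is not minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Backticks and "||" are shell syntax, so the whole string must be handed to
	// bash -c; exec'ing crictl directly would lose the docker fallback.
	cmd := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}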
	I0316 00:21:07.277657  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.780721  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.121773  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.619756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.862465  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:07.862498  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:07.925812  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:07.925846  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:07.986121  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:07.986152  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:08.036774  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:08.036817  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:08.091902  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:08.091933  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:08.142096  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:08.142128  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:08.210747  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:08.210789  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:08.270225  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:08.270259  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:10.817112  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:21:10.822359  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:21:10.823955  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:10.823978  123537 api_server.go:131] duration metric: took 4.090210216s to wait for apiserver health ...
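The healthz probe above hits https://192.168.61.91:8443/healthz and accepts the control plane once it answers 200 "ok". A minimal sketch of such a probe, assuming a self-signed apiserver certificate (hence InsecureSkipVerify) and a hypothetical waitForHealthz helper; it is illustrative only, not minikube's api_server.go:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the endpoint until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200: %s\n", url, body)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.91:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}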
	I0316 00:21:10.823988  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:10.824019  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:10.824076  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:10.872487  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:10.872514  123537 cri.go:89] found id: ""
	I0316 00:21:10.872524  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:10.872590  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.877131  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:10.877197  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:10.916699  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:10.916728  123537 cri.go:89] found id: ""
	I0316 00:21:10.916737  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:10.916797  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.921114  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:10.921182  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:10.964099  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:10.964123  123537 cri.go:89] found id: ""
	I0316 00:21:10.964132  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:10.964191  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.968716  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:10.968788  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.008883  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.008909  123537 cri.go:89] found id: ""
	I0316 00:21:11.008919  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:11.008974  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.014068  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.014138  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.067209  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.067239  123537 cri.go:89] found id: ""
	I0316 00:21:11.067251  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:11.067315  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.072536  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.072663  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.119366  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.119399  123537 cri.go:89] found id: ""
	I0316 00:21:11.119411  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:11.119462  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.124502  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.124590  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.169458  123537 cri.go:89] found id: ""
	I0316 00:21:11.169494  123537 logs.go:276] 0 containers: []
	W0316 00:21:11.169505  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.169513  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:11.169576  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:11.218886  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:11.218923  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:11.218928  123537 cri.go:89] found id: ""
	I0316 00:21:11.218938  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:11.219002  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.223583  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.228729  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:11.228753  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:11.282781  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:11.282818  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:11.347330  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:11.347379  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.401191  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:11.401225  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.453126  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:11.453158  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.523058  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.523110  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.944108  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.944157  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:12.001558  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:12.001602  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:12.062833  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:12.062885  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:12.078726  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:12.078762  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:12.209248  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:12.209284  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:12.251891  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:12.251930  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:12.296240  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:12.296271  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:14.846244  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:14.846274  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.846279  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.846283  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.846287  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.846290  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.846294  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.846299  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.846302  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.846309  123537 system_pods.go:74] duration metric: took 4.022315588s to wait for pod list to return data ...
	I0316 00:21:14.846317  123537 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:14.848830  123537 default_sa.go:45] found service account: "default"
	I0316 00:21:14.848852  123537 default_sa.go:55] duration metric: took 2.529805ms for default service account to be created ...
	I0316 00:21:14.848859  123537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:14.861369  123537 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:14.861396  123537 system_pods.go:89] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.861401  123537 system_pods.go:89] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.861405  123537 system_pods.go:89] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.861409  123537 system_pods.go:89] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.861448  123537 system_pods.go:89] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.861456  123537 system_pods.go:89] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.861465  123537 system_pods.go:89] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.861470  123537 system_pods.go:89] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.861478  123537 system_pods.go:126] duration metric: took 12.614437ms to wait for k8s-apps to be running ...
	I0316 00:21:14.861488  123537 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:14.861534  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:14.879439  123537 system_svc.go:56] duration metric: took 17.934537ms WaitForService to wait for kubelet
	I0316 00:21:14.879484  123537 kubeadm.go:576] duration metric: took 4m24.743827748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:14.879523  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:14.882642  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:14.882673  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:14.882716  123537 node_conditions.go:105] duration metric: took 3.184841ms to run NodePressure ...
	I0316 00:21:14.882733  123537 start.go:240] waiting for startup goroutines ...
	I0316 00:21:14.882749  123537 start.go:245] waiting for cluster config update ...
	I0316 00:21:14.882789  123537 start.go:254] writing updated cluster config ...
	I0316 00:21:14.883119  123537 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:14.937804  123537 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:14.939886  123537 out.go:177] * Done! kubectl is now configured to use "embed-certs-666637" cluster and "default" namespace by default
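Before the "Done!" line, this run verifies kube-system pods, the default service account, the kubelet unit, and node capacity (the NodePressure lines report 17734596Ki of ephemeral storage and 2 CPUs). A hedged client-go sketch of that capacity read, assuming the kubeconfig path shown in the log is reachable from where the sketch runs; it is not minikube's node_conditions.go:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a ResourceList keyed by resource name.
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
	}
}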
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:12.278383  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.279769  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:12.124356  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.619164  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:16.777597  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.277188  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.119492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.119935  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.278136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:22.779721  123819 pod_ready.go:81] duration metric: took 4m0.010022344s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:22.779752  123819 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:22.779762  123819 pod_ready.go:38] duration metric: took 4m5.913207723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
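The pod_ready lines above poll metrics-server-57f55c9bc5-cm878 for the Ready condition until the wait budget runs out ("context deadline exceeded"). A hedged client-go sketch of that kind of wait, with an assumed kubeconfig path and a hypothetical waitPodReady helper rather than minikube's pod_ready.go:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or ctx expires.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	fmt.Println(waitPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-cm878"))
}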
	I0316 00:21:22.779779  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:22.779814  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:22.779876  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:22.836022  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:22.836058  123819 cri.go:89] found id: ""
	I0316 00:21:22.836069  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:22.836131  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.841289  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:22.841362  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:22.883980  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:22.884007  123819 cri.go:89] found id: ""
	I0316 00:21:22.884018  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:22.884084  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.889352  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:22.889427  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:22.929947  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:22.929977  123819 cri.go:89] found id: ""
	I0316 00:21:22.929987  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:22.930033  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.935400  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:22.935485  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:22.975548  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:22.975580  123819 cri.go:89] found id: ""
	I0316 00:21:22.975598  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:22.975671  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.981916  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:22.981998  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.019925  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.019965  123819 cri.go:89] found id: ""
	I0316 00:21:23.019977  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:23.020046  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.024870  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.024960  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.068210  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.068241  123819 cri.go:89] found id: ""
	I0316 00:21:23.068253  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:23.068344  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.073492  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.073578  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.113267  123819 cri.go:89] found id: ""
	I0316 00:21:23.113301  123819 logs.go:276] 0 containers: []
	W0316 00:21:23.113311  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.113319  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:23.113382  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:23.160155  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:23.160175  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.160179  123819 cri.go:89] found id: ""
	I0316 00:21:23.160192  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:23.160241  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.165125  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.169508  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:23.169530  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.218749  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:23.218786  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.274140  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:23.274177  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.320515  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:23.320559  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:23.835119  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:23.835173  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:23.907635  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.907691  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.925071  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:23.925126  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:23.991996  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:23.992028  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:24.032865  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.032899  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.090947  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:24.090987  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:24.285862  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:24.285896  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:24.337983  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:24.338027  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:24.379626  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:24.379657  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:21.618894  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:24.122648  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:26.918844  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.938014  123819 api_server.go:72] duration metric: took 4m17.276244s to wait for apiserver process to appear ...
	I0316 00:21:26.938053  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:26.938095  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:26.938157  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:26.983515  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:26.983538  123819 cri.go:89] found id: ""
	I0316 00:21:26.983546  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:26.983595  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:26.989278  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:26.989341  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:27.039968  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.040000  123819 cri.go:89] found id: ""
	I0316 00:21:27.040009  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:27.040078  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.045617  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:27.045687  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:27.085920  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.085948  123819 cri.go:89] found id: ""
	I0316 00:21:27.085960  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:27.086029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.090911  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:27.090989  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:27.137289  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:27.137322  123819 cri.go:89] found id: ""
	I0316 00:21:27.137333  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:27.137393  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.141956  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:27.142031  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:27.180823  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.180845  123819 cri.go:89] found id: ""
	I0316 00:21:27.180854  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:27.180919  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.185439  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:27.185523  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:27.225775  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:27.225797  123819 cri.go:89] found id: ""
	I0316 00:21:27.225805  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:27.225854  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.230648  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:27.230717  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:27.269429  123819 cri.go:89] found id: ""
	I0316 00:21:27.269465  123819 logs.go:276] 0 containers: []
	W0316 00:21:27.269477  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:27.269485  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:27.269550  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:27.308288  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.308316  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.308321  123819 cri.go:89] found id: ""
	I0316 00:21:27.308329  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:27.308378  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.312944  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.317794  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:27.317829  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:27.364287  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:27.364323  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.419482  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:27.419521  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.468553  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:27.468585  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.513287  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:27.513320  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.561382  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:27.561426  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.601292  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:27.601325  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:27.656848  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:27.656902  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:27.796212  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:27.796245  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:28.246569  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:28.246611  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:28.302971  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:28.303015  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:28.359613  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:28.359645  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:28.375844  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:28.375877  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:26.124217  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:28.619599  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:30.921320  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:21:30.926064  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:21:30.927332  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:30.927353  123819 api_server.go:131] duration metric: took 3.989292523s to wait for apiserver health ...
	I0316 00:21:30.927361  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:30.927386  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:30.927438  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:30.975348  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:30.975376  123819 cri.go:89] found id: ""
	I0316 00:21:30.975389  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:30.975459  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:30.980128  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:30.980194  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:31.029534  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.029563  123819 cri.go:89] found id: ""
	I0316 00:21:31.029574  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:31.029627  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.034066  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:31.034149  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:31.073857  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.073884  123819 cri.go:89] found id: ""
	I0316 00:21:31.073892  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:31.073961  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.078421  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:31.078501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:31.117922  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.117951  123819 cri.go:89] found id: ""
	I0316 00:21:31.117964  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:31.118029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.122435  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:31.122501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:31.161059  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.161089  123819 cri.go:89] found id: ""
	I0316 00:21:31.161101  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:31.161155  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.165503  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:31.165572  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:31.207637  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.207669  123819 cri.go:89] found id: ""
	I0316 00:21:31.207679  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:31.207742  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.212296  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:31.212360  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:31.251480  123819 cri.go:89] found id: ""
	I0316 00:21:31.251519  123819 logs.go:276] 0 containers: []
	W0316 00:21:31.251530  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:31.251539  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:31.251608  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:31.296321  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.296345  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.296350  123819 cri.go:89] found id: ""
	I0316 00:21:31.296357  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:31.296414  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.302159  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.306501  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:31.306526  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.348347  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:31.348379  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.388542  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:31.388573  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:31.439926  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:31.439962  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:31.499674  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:31.499711  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:31.552720  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:31.552771  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.605281  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:31.605331  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.651964  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:31.651997  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.696113  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:31.696150  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.749712  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:31.749751  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.801476  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:31.801508  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:32.236105  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:32.236146  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:32.253815  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:32.253848  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:34.930730  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:34.930759  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.930763  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.930767  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.930772  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.930775  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.930778  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.930783  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.930788  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.930798  123819 system_pods.go:74] duration metric: took 4.003426137s to wait for pod list to return data ...
	I0316 00:21:34.930807  123819 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:34.933462  123819 default_sa.go:45] found service account: "default"
	I0316 00:21:34.933492  123819 default_sa.go:55] duration metric: took 2.674728ms for default service account to be created ...
	I0316 00:21:34.933500  123819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:34.939351  123819 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:34.939382  123819 system_pods.go:89] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.939393  123819 system_pods.go:89] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.939400  123819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.939406  123819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.939414  123819 system_pods.go:89] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.939420  123819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.939442  123819 system_pods.go:89] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.939454  123819 system_pods.go:89] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.939469  123819 system_pods.go:126] duration metric: took 5.962328ms to wait for k8s-apps to be running ...
	I0316 00:21:34.939482  123819 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:34.939539  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:34.958068  123819 system_svc.go:56] duration metric: took 18.572929ms WaitForService to wait for kubelet
	I0316 00:21:34.958108  123819 kubeadm.go:576] duration metric: took 4m25.296341727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:34.958130  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:34.962603  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:34.962629  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:34.962641  123819 node_conditions.go:105] duration metric: took 4.505615ms to run NodePressure ...
	I0316 00:21:34.962657  123819 start.go:240] waiting for startup goroutines ...
	I0316 00:21:34.962667  123819 start.go:245] waiting for cluster config update ...
	I0316 00:21:34.962690  123819 start.go:254] writing updated cluster config ...
	I0316 00:21:34.963009  123819 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:35.015774  123819 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:35.019103  123819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-313436" cluster and "default" namespace by default
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:21:31.121456  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:33.122437  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:35.618906  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:37.619223  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:40.120743  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:42.619309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:44.619544  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:47.120179  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:49.619419  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:52.124510  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:54.125147  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:56.621651  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:59.120895  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:01.618287  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:03.620297  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:06.119870  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:08.122618  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:10.619464  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.121381  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:15.619590  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.122483  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:19.112568  123454 pod_ready.go:81] duration metric: took 4m0.000767313s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	E0316 00:22:19.112600  123454 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0316 00:22:19.112621  123454 pod_ready.go:38] duration metric: took 4m15.544198169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:22:19.112652  123454 kubeadm.go:591] duration metric: took 4m23.072115667s to restartPrimaryControlPlane
	W0316 00:22:19.112713  123454 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:22:19.112769  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:51.249327  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.136527598s)
	I0316 00:22:51.249406  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:22:51.268404  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:22:51.280832  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:22:51.292639  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:22:51.292661  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:22:51.292712  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:22:51.303272  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:22:51.303347  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:22:51.313854  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:22:51.324290  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:22:51.324361  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:22:51.334879  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.345302  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:22:51.345382  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.355682  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:22:51.366601  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:22:51.366660  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:22:51.377336  123454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:22:51.594624  123454 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:00.473055  123454 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0316 00:23:00.473140  123454 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:00.473255  123454 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:00.473415  123454 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:00.473551  123454 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:00.473682  123454 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:00.475591  123454 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:00.475704  123454 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:00.475803  123454 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:00.475905  123454 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:00.476001  123454 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:00.476100  123454 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:00.476190  123454 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:00.476281  123454 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:00.476378  123454 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:00.476516  123454 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:00.476647  123454 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:00.476715  123454 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:00.476801  123454 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:00.476879  123454 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:00.476968  123454 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0316 00:23:00.477042  123454 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:00.477166  123454 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:00.477253  123454 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:00.477378  123454 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:00.477480  123454 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:00.479084  123454 out.go:204]   - Booting up control plane ...
	I0316 00:23:00.479206  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:00.479332  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:00.479440  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:00.479541  123454 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:00.479625  123454 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:00.479697  123454 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:00.479874  123454 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:23:00.479994  123454 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003092 seconds
	I0316 00:23:00.480139  123454 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:23:00.480339  123454 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:23:00.480445  123454 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:23:00.480687  123454 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-238598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:23:00.480789  123454 kubeadm.go:309] [bootstrap-token] Using token: aspuu8.i4yhgkjx7e43mgmn
	I0316 00:23:00.482437  123454 out.go:204]   - Configuring RBAC rules ...
	I0316 00:23:00.482568  123454 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:23:00.482697  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:23:00.482917  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:23:00.483119  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:23:00.483283  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:23:00.483406  123454 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:23:00.483582  123454 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:23:00.483653  123454 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:23:00.483714  123454 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:23:00.483720  123454 kubeadm.go:309] 
	I0316 00:23:00.483815  123454 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:23:00.483833  123454 kubeadm.go:309] 
	I0316 00:23:00.483973  123454 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:23:00.483986  123454 kubeadm.go:309] 
	I0316 00:23:00.484014  123454 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:23:00.484119  123454 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:23:00.484200  123454 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:23:00.484211  123454 kubeadm.go:309] 
	I0316 00:23:00.484283  123454 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:23:00.484288  123454 kubeadm.go:309] 
	I0316 00:23:00.484360  123454 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:23:00.484366  123454 kubeadm.go:309] 
	I0316 00:23:00.484452  123454 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:23:00.484560  123454 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:23:00.484657  123454 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:23:00.484666  123454 kubeadm.go:309] 
	I0316 00:23:00.484798  123454 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:23:00.484920  123454 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:23:00.484932  123454 kubeadm.go:309] 
	I0316 00:23:00.485053  123454 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485196  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:23:00.485227  123454 kubeadm.go:309] 	--control-plane 
	I0316 00:23:00.485241  123454 kubeadm.go:309] 
	I0316 00:23:00.485357  123454 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:23:00.485367  123454 kubeadm.go:309] 
	I0316 00:23:00.485488  123454 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485646  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0316 00:23:00.485661  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:23:00.485671  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:23:00.487417  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:23:00.489063  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:23:00.526147  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:23:00.571796  123454 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-238598 minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=no-preload-238598 minikube.k8s.io/primary=true
	I0316 00:23:00.892908  123454 ops.go:34] apiserver oom_adj: -16
	I0316 00:23:00.892994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.394077  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.893097  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.393114  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.893994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.393930  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.893428  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.393822  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.893810  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.393999  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.893998  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.393104  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.893725  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.393873  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.893432  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.394054  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.893595  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.393109  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.893621  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.393322  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.894024  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.393711  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.893465  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.393059  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.497890  123454 kubeadm.go:1107] duration metric: took 11.926069028s to wait for elevateKubeSystemPrivileges
	W0316 00:23:12.497951  123454 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:23:12.497962  123454 kubeadm.go:393] duration metric: took 5m16.508852945s to StartCluster
	I0316 00:23:12.497988  123454 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.498139  123454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:23:12.500632  123454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.500995  123454 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:23:12.502850  123454 out.go:177] * Verifying Kubernetes components...
	I0316 00:23:12.501089  123454 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:23:12.501233  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:23:12.504432  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:23:12.504443  123454 addons.go:69] Setting storage-provisioner=true in profile "no-preload-238598"
	I0316 00:23:12.504491  123454 addons.go:234] Setting addon storage-provisioner=true in "no-preload-238598"
	I0316 00:23:12.504502  123454 addons.go:69] Setting default-storageclass=true in profile "no-preload-238598"
	I0316 00:23:12.504515  123454 addons.go:69] Setting metrics-server=true in profile "no-preload-238598"
	I0316 00:23:12.504526  123454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-238598"
	I0316 00:23:12.504541  123454 addons.go:234] Setting addon metrics-server=true in "no-preload-238598"
	W0316 00:23:12.504551  123454 addons.go:243] addon metrics-server should already be in state true
	I0316 00:23:12.504582  123454 host.go:66] Checking if "no-preload-238598" exists ...
	W0316 00:23:12.504505  123454 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:23:12.504656  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.504996  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505012  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.505013  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505229  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.521634  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0316 00:23:12.521698  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0316 00:23:12.522283  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522377  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522836  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.522861  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.522990  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.523032  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.523203  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523375  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523737  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.523758  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524232  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.524277  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524695  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0316 00:23:12.525112  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.525610  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.525637  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.526025  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.526218  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.530010  123454 addons.go:234] Setting addon default-storageclass=true in "no-preload-238598"
	W0316 00:23:12.530029  123454 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:23:12.530053  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.530277  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.530315  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.540310  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0316 00:23:12.545850  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.545966  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0316 00:23:12.546335  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.546740  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.546761  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.547035  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.547232  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.548605  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.548626  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.549001  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.549058  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0316 00:23:12.549268  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.549323  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.549454  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.551419  123454 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:23:12.549975  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.551115  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.553027  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:23:12.553050  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:23:12.553074  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.553082  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.554948  123454 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:23:12.553404  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.556096  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556544  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.556568  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556640  123454 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.556660  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:23:12.556679  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.556769  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.557150  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.557176  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.557398  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.557600  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.557886  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.560220  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560555  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.560582  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560759  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.560982  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.561157  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.561318  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.574877  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0316 00:23:12.575802  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.576313  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.576337  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.576640  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.577015  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.578483  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.578814  123454 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.578835  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:23:12.578856  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.581832  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582439  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.582454  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.582465  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582635  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.582819  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.582969  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.729051  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:23:12.747162  123454 node_ready.go:35] waiting up to 6m0s for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.759957  123454 node_ready.go:49] node "no-preload-238598" has status "Ready":"True"
	I0316 00:23:12.759992  123454 node_ready.go:38] duration metric: took 12.79378ms for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.760006  123454 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.772201  123454 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795626  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.795660  123454 pod_ready.go:81] duration metric: took 23.429082ms for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795674  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808661  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.808688  123454 pod_ready.go:81] duration metric: took 13.006568ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808699  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821578  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.821613  123454 pod_ready.go:81] duration metric: took 12.904651ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821627  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.832585  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:23:12.832616  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:23:12.838375  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.838404  123454 pod_ready.go:81] duration metric: took 16.768452ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.838415  123454 pod_ready.go:38] duration metric: took 78.396172ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.838435  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:23:12.838522  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:23:12.889063  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.907225  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.924533  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:23:12.924565  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:23:12.947224  123454 api_server.go:72] duration metric: took 446.183679ms to wait for apiserver process to appear ...
	I0316 00:23:12.947257  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:23:12.947281  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:23:12.975463  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:12.975495  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:23:13.023702  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:23:13.039598  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:23:13.039638  123454 api_server.go:131] duration metric: took 92.372403ms to wait for apiserver health ...
	I0316 00:23:13.039649  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:23:13.069937  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:13.141358  123454 system_pods.go:59] 5 kube-system pods found
	I0316 00:23:13.141387  123454 system_pods.go:61] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.141391  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.141397  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.141400  123454 system_pods.go:61] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending
	I0316 00:23:13.141404  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.141411  123454 system_pods.go:74] duration metric: took 101.754765ms to wait for pod list to return data ...
	I0316 00:23:13.141419  123454 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:23:13.200153  123454 default_sa.go:45] found service account: "default"
	I0316 00:23:13.200193  123454 default_sa.go:55] duration metric: took 58.765381ms for default service account to be created ...
	I0316 00:23:13.200205  123454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:23:13.381398  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381431  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.381771  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.381825  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.381840  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.381849  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381862  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.382154  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.382159  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.382189  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.383303  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.383345  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.383353  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending
	I0316 00:23:13.383360  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.383368  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.383374  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.383384  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.383396  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.383440  123454 retry.go:31] will retry after 221.286986ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.408809  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.408839  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.409146  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.409191  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.409195  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.612171  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.612205  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612212  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612221  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.612226  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.612230  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.612236  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.612239  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.612260  123454 retry.go:31] will retry after 311.442515ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.934136  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.934170  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934177  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934185  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.934191  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.934197  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.934204  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.934210  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.934234  123454 retry.go:31] will retry after 453.147474ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.343055  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.435784176s)
	I0316 00:23:14.343123  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343139  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343497  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343523  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.343540  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343554  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343800  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.343876  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343895  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.404681  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.404725  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404738  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404748  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.404758  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.404767  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.404777  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.404790  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.404810  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.404821  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending
	I0316 00:23:14.404846  123454 retry.go:31] will retry after 464.575803ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.447649  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.377663696s)
	I0316 00:23:14.447706  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.447724  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448062  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448083  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448092  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.448100  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448367  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.448367  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448394  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448407  123454 addons.go:470] Verifying addon metrics-server=true in "no-preload-238598"
	I0316 00:23:14.450675  123454 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0316 00:23:14.452378  123454 addons.go:505] duration metric: took 1.951301533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0316 00:23:14.888167  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.888206  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:14.888219  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.888226  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.888236  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.888243  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.888252  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.888260  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.888292  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.888301  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:14.888325  123454 retry.go:31] will retry after 490.515879ms: missing components: kube-proxy
	I0316 00:23:15.389667  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:15.389694  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:15.389700  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Running
	I0316 00:23:15.389704  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:15.389708  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:15.389712  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:15.389716  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Running
	I0316 00:23:15.389721  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:15.389728  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:15.389735  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:15.389745  123454 system_pods.go:126] duration metric: took 2.189532563s to wait for k8s-apps to be running ...
	I0316 00:23:15.389757  123454 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:23:15.389805  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:15.409241  123454 system_svc.go:56] duration metric: took 19.469575ms WaitForService to wait for kubelet
	I0316 00:23:15.409273  123454 kubeadm.go:576] duration metric: took 2.908240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:23:15.409292  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:23:15.412530  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:23:15.412559  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:23:15.412570  123454 node_conditions.go:105] duration metric: took 3.272979ms to run NodePressure ...
	I0316 00:23:15.412585  123454 start.go:240] waiting for startup goroutines ...
	I0316 00:23:15.412594  123454 start.go:245] waiting for cluster config update ...
	I0316 00:23:15.412608  123454 start.go:254] writing updated cluster config ...
	I0316 00:23:15.412923  123454 ssh_runner.go:195] Run: rm -f paused
	I0316 00:23:15.468245  123454 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 00:23:15.470311  123454 out.go:177] * Done! kubectl is now configured to use "no-preload-238598" cluster and "default" namespace by default
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 
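	The suggestion printed above amounts to retrying the start with an explicit kubelet cgroup driver and then re-reading the kubelet journal on the node. A minimal sketch, assuming the KVM2 driver and CRI-O runtime used by this job; the profile name is a placeholder, not taken from the log:
	
		minikube start -p <profile> \
		  --driver=kvm2 \
		  --container-runtime=crio \
		  --extra-config=kubelet.cgroup-driver=systemd   # override suggested by the failure message above
	
		# if the kubelet still refuses connections on :10248, inspect it directly on the node
		minikube ssh -p <profile> -- sudo systemctl status kubelet
		minikube ssh -p <profile> -- "sudo journalctl -xeu kubelet | tail -n 100"
	
	Whether the cgroup-driver override actually resolves this run's failure is not established by the log; the CRI-O and kubelet sections below are the evidence to check first.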
	
	
	==> CRI-O <==
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.030205892Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&PodSandboxMetadata{Name:busybox,Uid:b494922e-2643-471f-bcda-1510733942e8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548215649772259,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:16:47.784747040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-t8xb4,Uid:e9feb9bc-2a4a-402b-9753-f2f84702db9c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548215648964
870,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:16:47.784737642Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ae5e796d6df21202e29455b54a2b374977a32d2d35777d03825259ee8d8ef954,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-bfnwf,Uid:de35c1e5-3847-4a31-a31a-86aeed12252c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548213844819749,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-bfnwf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de35c1e5-3847-4a31-a31a-86aeed12252c,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:16:47.
784753426Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&PodSandboxMetadata{Name:kube-proxy-8fpc5,Uid:a0d4bdc4-4f17-4b6a-8958-cecd1884016e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548208105905084,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8958-cecd1884016e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:16:47.784745062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:d503e849-8714-402d-aeef-26cd0f4aff39,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548208091968954,Labels:map[string
]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.i
o/config.seen: 2024-03-16T00:16:47.784754358Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&PodSandboxMetadata{Name:kube-scheduler-embed-certs-666637,Uid:4b35f67b5d7b32782627020932ee59d3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548203295935841,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4b35f67b5d7b32782627020932ee59d3,kubernetes.io/config.seen: 2024-03-16T00:16:42.775996002Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-666637,Uid:e1d485739ac46b8bf5f2eddb92efc69d,Namespace:kube-sy
stem,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548203279419777,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.91:2379,kubernetes.io/config.hash: e1d485739ac46b8bf5f2eddb92efc69d,kubernetes.io/config.seen: 2024-03-16T00:16:42.920583841Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-666637,Uid:8284e42cc130cc7e3b8b526d35eab878,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548203271039405,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-6666
37,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8284e42cc130cc7e3b8b526d35eab878,kubernetes.io/config.seen: 2024-03-16T00:16:42.775995055Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-666637,Uid:359f13fb608a64bccba28eae61bdee13,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548203265721102,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.91:8443,kubernetes.io/config.hash: 359f13fb608a64bccba28eae61bd
ee13,kubernetes.io/config.seen: 2024-03-16T00:16:42.775990462Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=4b292f52-13e5-4b69-a4b1-4359ce5d98c5 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.033185207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00f1ed4d-18f0-4533-b952-775201ad9b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.033399575Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00f1ed4d-18f0-4533-b952-775201ad9b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.034115487Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00f1ed4d-18f0-4533-b952-775201ad9b9a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.062350612Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9669b4ae-66a7-4cd6-aec4-41c28489078a name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.062425588Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9669b4ae-66a7-4cd6-aec4-41c28489078a name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.063940546Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a914763-7a57-4457-b2bd-e8d99a5e8dc5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.064683254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549017064658072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a914763-7a57-4457-b2bd-e8d99a5e8dc5 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.065262655Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=54d68f80-7d33-4897-a9b9-34019b10cd86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.065321426Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=54d68f80-7d33-4897-a9b9-34019b10cd86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.065568873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=54d68f80-7d33-4897-a9b9-34019b10cd86 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.109410184Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=70403748-6d08-47bc-94da-4409bd009e91 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.109574132Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=70403748-6d08-47bc-94da-4409bd009e91 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.110941607Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=28b8e0c4-8b98-49f1-b53f-f71271e8ea0d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.111334423Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549017111312995,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=28b8e0c4-8b98-49f1-b53f-f71271e8ea0d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.111820628Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e1cfdb8-c5d7-491b-8d1c-bbde8cc13baa name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.111871664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e1cfdb8-c5d7-491b-8d1c-bbde8cc13baa name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.112090471Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e1cfdb8-c5d7-491b-8d1c-bbde8cc13baa name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.148124444Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d74dae5-494e-455c-a542-b744f173fb3a name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.148198564Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d74dae5-494e-455c-a542-b744f173fb3a name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.149161148Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b8d7db8e-126c-453d-a1af-1c3f952633da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.150132920Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549017150105291,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b8d7db8e-126c-453d-a1af-1c3f952633da name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.150908033Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e9a7e87-1dc5-4d12-b7b0-62a1d1c9eab2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.150959924Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e9a7e87-1dc5-4d12-b7b0-62a1d1c9eab2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:17 embed-certs-666637 crio[689]: time="2024-03-16 00:30:17.151160579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e9a7e87-1dc5-4d12-b7b0-62a1d1c9eab2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	413fba3fe664b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   13e45a234e609       storage-provisioner
	d587ebe2db0fa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   44db51216f85d       busybox
	4e6f75410b4de       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   491918599a8b4       coredns-5dd5756b68-t8xb4
	0947f6f374016       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   e51471be8a4d8       kube-proxy-8fpc5
	ea3eb17a8a72d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   13e45a234e609       storage-provisioner
	4909a6f121b0c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   bed149cfa5222       kube-scheduler-embed-certs-666637
	229fef1811744       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   b52f4be6b8a49       etcd-embed-certs-666637
	9041a3c9211cc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   262819213ca04       kube-controller-manager-embed-certs-666637
	81025ff5aef08       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   5f1799ffd005b       kube-apiserver-embed-certs-666637
	
	
	==> coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50814 - 16051 "HINFO IN 7249487384717712784.1440845366661011137. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022155485s
	
	
	==> describe nodes <==
	Name:               embed-certs-666637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-666637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=embed-certs-666637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_08_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:08:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-666637
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:30:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:27:31 +0000   Sat, 16 Mar 2024 00:08:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:27:31 +0000   Sat, 16 Mar 2024 00:08:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:27:31 +0000   Sat, 16 Mar 2024 00:08:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:27:31 +0000   Sat, 16 Mar 2024 00:16:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.91
	  Hostname:    embed-certs-666637
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e251e5a35f1548d18587fe3724a1b0f6
	  System UUID:                e251e5a3-5f15-48d1-8587-fe3724a1b0f6
	  Boot ID:                    78240f16-f223-4c62-a053-d4b16932ca9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 coredns-5dd5756b68-t8xb4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-embed-certs-666637                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         22m
	  kube-system                 kube-apiserver-embed-certs-666637             250m (12%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-controller-manager-embed-certs-666637    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 kube-proxy-8fpc5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-666637             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 metrics-server-57f55c9bc5-bfnwf               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         21m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    22m                kubelet          Node embed-certs-666637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m                kubelet          Node embed-certs-666637 status is now: NodeHasSufficientPID
	  Normal  NodeReady                22m                kubelet          Node embed-certs-666637 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  22m                kubelet          Node embed-certs-666637 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           21m                node-controller  Node embed-certs-666637 event: Registered Node embed-certs-666637 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node embed-certs-666637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node embed-certs-666637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node embed-certs-666637 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-666637 event: Registered Node embed-certs-666637 in Controller
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051908] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040879] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.487567] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.790399] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.461323] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.510268] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.058399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067639] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.190466] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.139213] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.244320] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.034021] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.061079] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.752537] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +5.606386] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.005541] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +3.661010] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.728664] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] <==
	{"level":"warn","ts":"2024-03-16T00:16:59.94476Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.909788ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" ","response":"range_response_count:1 size:6299"}
	{"level":"info","ts":"2024-03-16T00:16:59.944862Z","caller":"traceutil/trace.go:171","msg":"trace[2090835145] range","detail":"{range_begin:/registry/pods/kube-system/kube-controller-manager-embed-certs-666637; range_end:; response_count:1; response_revision:596; }","duration":"104.0567ms","start":"2024-03-16T00:16:59.84079Z","end":"2024-03-16T00:16:59.944847Z","steps":["trace[2090835145] 'range keys from in-memory index tree'  (duration: 103.808524ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:00.197269Z","caller":"traceutil/trace.go:171","msg":"trace[116029348] transaction","detail":"{read_only:false; response_revision:597; number_of_response:1; }","duration":"241.200698ms","start":"2024-03-16T00:16:59.956051Z","end":"2024-03-16T00:17:00.197252Z","steps":["trace[116029348] 'process raft request'  (duration: 241.05026ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:00.9412Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.009872ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5807266669881026152 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" mod_revision:597 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" value_size:5998 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-16T00:17:00.941293Z","caller":"traceutil/trace.go:171","msg":"trace[2143388295] linearizableReadLoop","detail":"{readStateIndex:639; appliedIndex:638; }","duration":"573.397668ms","start":"2024-03-16T00:17:00.367885Z","end":"2024-03-16T00:17:00.941283Z","steps":["trace[2143388295] 'read index received'  (duration: 449.701927ms)","trace[2143388295] 'applied index is now lower than readState.Index'  (duration: 123.694333ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:17:00.94135Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"573.500688ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" ","response":"range_response_count:1 size:5363"}
	{"level":"info","ts":"2024-03-16T00:17:00.941366Z","caller":"traceutil/trace.go:171","msg":"trace[1511233561] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-666637; range_end:; response_count:1; response_revision:598; }","duration":"573.522395ms","start":"2024-03-16T00:17:00.367839Z","end":"2024-03-16T00:17:00.941361Z","steps":["trace[1511233561] 'agreement among raft nodes before linearized reading'  (duration: 573.475211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:00.941385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:00.367825Z","time spent":"573.555369ms","remote":"127.0.0.1:51954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":5386,"request content":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" "}
	{"level":"info","ts":"2024-03-16T00:17:00.941436Z","caller":"traceutil/trace.go:171","msg":"trace[664336527] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"726.299656ms","start":"2024-03-16T00:17:00.21513Z","end":"2024-03-16T00:17:00.94143Z","steps":["trace[664336527] 'process raft request'  (duration: 602.598693ms)","trace[664336527] 'compare'  (duration: 122.710935ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:17:00.941576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:00.215119Z","time spent":"726.426604ms","remote":"127.0.0.1:51954","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6075,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" mod_revision:597 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" value_size:5998 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" > >"}
	{"level":"warn","ts":"2024-03-16T00:17:01.985954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"618.258149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" ","response":"range_response_count:1 size:5363"}
	{"level":"info","ts":"2024-03-16T00:17:01.986025Z","caller":"traceutil/trace.go:171","msg":"trace[678499935] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-666637; range_end:; response_count:1; response_revision:598; }","duration":"618.336629ms","start":"2024-03-16T00:17:01.367677Z","end":"2024-03-16T00:17:01.986014Z","steps":["trace[678499935] 'range keys from in-memory index tree'  (duration: 618.177763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:01.986059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:01.367663Z","time spent":"618.384302ms","remote":"127.0.0.1:51954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":5386,"request content":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" "}
	{"level":"info","ts":"2024-03-16T00:17:20.272748Z","caller":"traceutil/trace.go:171","msg":"trace[1937797935] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:660; }","duration":"172.917714ms","start":"2024-03-16T00:17:20.099808Z","end":"2024-03-16T00:17:20.272726Z","steps":["trace[1937797935] 'read index received'  (duration: 172.573172ms)","trace[1937797935] 'applied index is now lower than readState.Index'  (duration: 343.738µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:17:20.27289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.080165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf\" ","response":"range_response_count:1 size:4026"}
	{"level":"info","ts":"2024-03-16T00:17:20.272913Z","caller":"traceutil/trace.go:171","msg":"trace[1764960300] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf; range_end:; response_count:1; response_revision:615; }","duration":"173.130296ms","start":"2024-03-16T00:17:20.099776Z","end":"2024-03-16T00:17:20.272907Z","steps":["trace[1764960300] 'agreement among raft nodes before linearized reading'  (duration: 173.033008ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:20.273099Z","caller":"traceutil/trace.go:171","msg":"trace[584271795] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"229.069778ms","start":"2024-03-16T00:17:20.043979Z","end":"2024-03-16T00:17:20.273048Z","steps":["trace[584271795] 'process raft request'  (duration: 228.545251ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:20.789545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.552539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf\" ","response":"range_response_count:1 size:4026"}
	{"level":"info","ts":"2024-03-16T00:17:20.789688Z","caller":"traceutil/trace.go:171","msg":"trace[860486802] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf; range_end:; response_count:1; response_revision:615; }","duration":"189.828696ms","start":"2024-03-16T00:17:20.599845Z","end":"2024-03-16T00:17:20.789674Z","steps":["trace[860486802] 'range keys from in-memory index tree'  (duration: 189.396447ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:22.100797Z","caller":"traceutil/trace.go:171","msg":"trace[1979416583] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"258.133652ms","start":"2024-03-16T00:17:21.842642Z","end":"2024-03-16T00:17:22.100776Z","steps":["trace[1979416583] 'process raft request'  (duration: 257.952982ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:22.107271Z","caller":"traceutil/trace.go:171","msg":"trace[159985000] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"263.966387ms","start":"2024-03-16T00:17:21.843288Z","end":"2024-03-16T00:17:22.107254Z","steps":["trace[159985000] 'process raft request'  (duration: 263.797546ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:22.227201Z","caller":"traceutil/trace.go:171","msg":"trace[498684736] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"109.225336ms","start":"2024-03-16T00:17:22.117954Z","end":"2024-03-16T00:17:22.227179Z","steps":["trace[498684736] 'process raft request'  (duration: 103.747843ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:26:45.614258Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":842}
	{"level":"info","ts":"2024-03-16T00:26:45.619267Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":842,"took":"3.100012ms","hash":1522911222}
	{"level":"info","ts":"2024-03-16T00:26:45.619551Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1522911222,"revision":842,"compact-revision":-1}
	
	
	==> kernel <==
	 00:30:17 up 13 min,  0 users,  load average: 0.24, 0.19, 0.09
	Linux embed-certs-666637 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] <==
	I0316 00:26:46.991046       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:26:47.991589       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:26:47.991649       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:26:47.991658       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:26:47.991783       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:26:47.991880       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:26:47.993123       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:27:46.885616       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:27:47.992316       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:47.992497       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:27:47.992590       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:27:47.993502       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:47.993651       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:27:47.993690       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:28:46.885898       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0316 00:29:46.885610       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:29:47.993770       1 handler_proxy.go:93] no RequestInfo found in the context
	W0316 00:29:47.993794       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:29:47.994066       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:29:47.994097       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0316 00:29:47.994066       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:29:47.996048       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] <==
	I0316 00:24:30.226921       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:24:59.747815       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:25:00.237791       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:25:29.754587       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:25:30.245310       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:25:59.760631       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:26:00.253709       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:26:29.765931       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:26:30.262347       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:26:59.778593       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:27:00.269698       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:27:29.784785       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:27:30.278257       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:27:59.790719       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:28:00.287021       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:28:05.854801       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="391.731µs"
	I0316 00:28:16.850963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="202.482µs"
	E0316 00:28:29.797249       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:28:30.294870       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:28:59.805607       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:29:00.303148       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:29:29.811625       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:29:30.316106       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:29:59.817527       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:30:00.326100       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] <==
	I0316 00:16:48.415438       1 server_others.go:69] "Using iptables proxy"
	I0316 00:16:48.425762       1 node.go:141] Successfully retrieved node IP: 192.168.61.91
	I0316 00:16:48.471226       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:16:48.471247       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:16:48.477097       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:16:48.477152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:16:48.477359       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:16:48.477369       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:16:48.478028       1 config.go:188] "Starting service config controller"
	I0316 00:16:48.478044       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:16:48.478076       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:16:48.478079       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:16:48.478761       1 config.go:315] "Starting node config controller"
	I0316 00:16:48.478770       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:16:48.579558       1 shared_informer.go:318] Caches are synced for node config
	I0316 00:16:48.579590       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:16:48.579728       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] <==
	I0316 00:16:44.669899       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:16:46.966169       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:16:46.966811       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:16:46.966933       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:16:46.966961       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:16:46.991596       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:16:46.991683       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:16:46.993140       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:16:46.993244       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:16:47.003189       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:16:47.003232       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:16:47.093905       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:27:54 embed-certs-666637 kubelet[903]: E0316 00:27:54.881512     903 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 16 00:27:54 embed-certs-666637 kubelet[903]: E0316 00:27:54.881586     903 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 16 00:27:54 embed-certs-666637 kubelet[903]: E0316 00:27:54.881886     903 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rj5pn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-bfnwf_kube-system(de35c1e5-3847-4a31-a31a-86aeed12252c): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 16 00:27:54 embed-certs-666637 kubelet[903]: E0316 00:27:54.881924     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:28:05 embed-certs-666637 kubelet[903]: E0316 00:28:05.833349     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:28:16 embed-certs-666637 kubelet[903]: E0316 00:28:16.834253     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:28:29 embed-certs-666637 kubelet[903]: E0316 00:28:29.833897     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:28:41 embed-certs-666637 kubelet[903]: E0316 00:28:41.833993     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:28:42 embed-certs-666637 kubelet[903]: E0316 00:28:42.855144     903 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:28:42 embed-certs-666637 kubelet[903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:28:42 embed-certs-666637 kubelet[903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:28:42 embed-certs-666637 kubelet[903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:28:42 embed-certs-666637 kubelet[903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:28:54 embed-certs-666637 kubelet[903]: E0316 00:28:54.833243     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:29:08 embed-certs-666637 kubelet[903]: E0316 00:29:08.834280     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:29:22 embed-certs-666637 kubelet[903]: E0316 00:29:22.836038     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:29:33 embed-certs-666637 kubelet[903]: E0316 00:29:33.833987     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:29:42 embed-certs-666637 kubelet[903]: E0316 00:29:42.855341     903 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:29:42 embed-certs-666637 kubelet[903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:29:42 embed-certs-666637 kubelet[903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:29:42 embed-certs-666637 kubelet[903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:29:42 embed-certs-666637 kubelet[903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:29:45 embed-certs-666637 kubelet[903]: E0316 00:29:45.833854     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:30:00 embed-certs-666637 kubelet[903]: E0316 00:30:00.833639     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:30:13 embed-certs-666637 kubelet[903]: E0316 00:30:13.834230     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	
	
	==> storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] <==
	I0316 00:17:19.161354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:17:19.175015       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:17:19.175128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:17:36.578324       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:17:36.578751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-666637_eb402c7e-4eec-4a68-8bd2-89381fd513f2!
	I0316 00:17:36.582960       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70064dba-4c34-4434-8ff6-cae9b56858b1", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-666637_eb402c7e-4eec-4a68-8bd2-89381fd513f2 became leader
	I0316 00:17:36.679978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-666637_eb402c7e-4eec-4a68-8bd2-89381fd513f2!
	
	
	==> storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] <==
	I0316 00:16:48.352111       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0316 00:17:18.354955       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-666637 -n embed-certs-666637
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-666637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bfnwf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-666637 describe pod metrics-server-57f55c9bc5-bfnwf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-666637 describe pod metrics-server-57f55c9bc5-bfnwf: exit status 1 (68.128266ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bfnwf" not found

** /stderr **
helpers_test.go:279: kubectl --context embed-certs-666637 describe pod metrics-server-57f55c9bc5-bfnwf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.22s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-16 00:30:35.611275638 +0000 UTC m=+5666.034126905
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-313436 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-313436 logs -n 25: (2.103800852s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-313368 ssh                                | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:13:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:00.891560  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:13:06.971548  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:10.043616  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:16.123615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:19.195641  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:25.275569  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:28.347627  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:34.427628  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:37.499621  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:43.579636  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:46.651611  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:52.731602  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:55.803555  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:01.883545  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:04.955579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:11.035610  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:14.107615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:20.187606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:23.259572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:29.339575  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:32.411617  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:38.491587  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:41.563659  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:47.643582  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:50.715565  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:56.795596  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:59.867614  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:05.947572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:09.019585  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:15.099606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:18.171563  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:24.251589  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:27.323592  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:33.403599  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:36.475652  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:42.555600  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:45.627577  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:51.707630  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:54.779625  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:00.859579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:03.931626  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:10.011762  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:13.083615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:16.087122  123537 start.go:364] duration metric: took 4m28.254030119s to acquireMachinesLock for "embed-certs-666637"
	I0316 00:16:16.087211  123537 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:16.087224  123537 fix.go:54] fixHost starting: 
	I0316 00:16:16.087613  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:16.087653  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:16.102371  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0316 00:16:16.102813  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:16.103305  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:16.103343  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:16.103693  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:16.103874  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:16.104010  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:16.105752  123537 fix.go:112] recreateIfNeeded on embed-certs-666637: state=Stopped err=<nil>
	I0316 00:16:16.105780  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	W0316 00:16:16.105959  123537 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:16.107881  123537 out.go:177] * Restarting existing kvm2 VM for "embed-certs-666637" ...
	I0316 00:16:16.109056  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Start
	I0316 00:16:16.109231  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring networks are active...
	I0316 00:16:16.110036  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network default is active
	I0316 00:16:16.110372  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network mk-embed-certs-666637 is active
	I0316 00:16:16.110782  123537 main.go:141] libmachine: (embed-certs-666637) Getting domain xml...
	I0316 00:16:16.111608  123537 main.go:141] libmachine: (embed-certs-666637) Creating domain...
	I0316 00:16:17.296901  123537 main.go:141] libmachine: (embed-certs-666637) Waiting to get IP...
	I0316 00:16:17.297746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.298129  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.298317  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.298111  124543 retry.go:31] will retry after 269.98852ms: waiting for machine to come up
	I0316 00:16:17.569866  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.570322  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.570349  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.570278  124543 retry.go:31] will retry after 244.711835ms: waiting for machine to come up
	I0316 00:16:16.084301  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:16.084359  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084699  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:16:16.084726  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084970  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:16:16.086868  123454 machine.go:97] duration metric: took 4m35.39093995s to provisionDockerMachine
	I0316 00:16:16.087007  123454 fix.go:56] duration metric: took 4m35.413006758s for fixHost
	I0316 00:16:16.087038  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 4m35.413320023s
	W0316 00:16:16.087068  123454 start.go:713] error starting host: provision: host is not running
	W0316 00:16:16.087236  123454 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0316 00:16:16.087249  123454 start.go:728] Will try again in 5 seconds ...
	I0316 00:16:17.816747  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.817165  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.817196  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.817109  124543 retry.go:31] will retry after 326.155242ms: waiting for machine to come up
	I0316 00:16:18.144611  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.145047  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.145081  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.145000  124543 retry.go:31] will retry after 464.805158ms: waiting for machine to come up
	I0316 00:16:18.611746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.612105  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.612140  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.612039  124543 retry.go:31] will retry after 593.718495ms: waiting for machine to come up
	I0316 00:16:19.208024  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.208444  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.208476  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.208379  124543 retry.go:31] will retry after 772.07702ms: waiting for machine to come up
	I0316 00:16:19.982326  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.982800  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.982827  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.982706  124543 retry.go:31] will retry after 846.887476ms: waiting for machine to come up
	I0316 00:16:20.830726  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:20.831144  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:20.831168  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:20.831098  124543 retry.go:31] will retry after 1.274824907s: waiting for machine to come up
	I0316 00:16:22.107855  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:22.108252  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:22.108278  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:22.108209  124543 retry.go:31] will retry after 1.41217789s: waiting for machine to come up
	I0316 00:16:21.088013  123454 start.go:360] acquireMachinesLock for no-preload-238598: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:23.522725  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:23.523143  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:23.523179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:23.523094  124543 retry.go:31] will retry after 1.567285216s: waiting for machine to come up
	I0316 00:16:25.092539  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:25.092954  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:25.092981  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:25.092941  124543 retry.go:31] will retry after 2.260428679s: waiting for machine to come up
	I0316 00:16:27.354650  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:27.355051  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:27.355082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:27.354990  124543 retry.go:31] will retry after 2.402464465s: waiting for machine to come up
	I0316 00:16:29.758774  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:29.759220  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:29.759253  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:29.759176  124543 retry.go:31] will retry after 3.63505234s: waiting for machine to come up
	I0316 00:16:34.648552  123819 start.go:364] duration metric: took 4m4.062008179s to acquireMachinesLock for "default-k8s-diff-port-313436"
	I0316 00:16:34.648628  123819 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:34.648638  123819 fix.go:54] fixHost starting: 
	I0316 00:16:34.649089  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:34.649134  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:34.667801  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0316 00:16:34.668234  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:34.668737  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:16:34.668768  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:34.669123  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:34.669349  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:34.669552  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:16:34.671100  123819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313436: state=Stopped err=<nil>
	I0316 00:16:34.671139  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	W0316 00:16:34.671297  123819 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:34.673738  123819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-313436" ...
	I0316 00:16:34.675120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Start
	I0316 00:16:34.675292  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring networks are active...
	I0316 00:16:34.676038  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network default is active
	I0316 00:16:34.676427  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network mk-default-k8s-diff-port-313436 is active
	I0316 00:16:34.676855  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Getting domain xml...
	I0316 00:16:34.677501  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Creating domain...
	I0316 00:16:33.397686  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398274  123537 main.go:141] libmachine: (embed-certs-666637) Found IP for machine: 192.168.61.91
	I0316 00:16:33.398301  123537 main.go:141] libmachine: (embed-certs-666637) Reserving static IP address...
	I0316 00:16:33.398319  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has current primary IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398829  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.398859  123537 main.go:141] libmachine: (embed-certs-666637) DBG | skip adding static IP to network mk-embed-certs-666637 - found existing host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"}
	I0316 00:16:33.398883  123537 main.go:141] libmachine: (embed-certs-666637) Reserved static IP address: 192.168.61.91
	I0316 00:16:33.398896  123537 main.go:141] libmachine: (embed-certs-666637) Waiting for SSH to be available...
	I0316 00:16:33.398905  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Getting to WaitForSSH function...
	I0316 00:16:33.401376  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.401835  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.401872  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.402054  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH client type: external
	I0316 00:16:33.402082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa (-rw-------)
	I0316 00:16:33.402113  123537 main.go:141] libmachine: (embed-certs-666637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:33.402141  123537 main.go:141] libmachine: (embed-certs-666637) DBG | About to run SSH command:
	I0316 00:16:33.402188  123537 main.go:141] libmachine: (embed-certs-666637) DBG | exit 0
	I0316 00:16:33.523353  123537 main.go:141] libmachine: (embed-certs-666637) DBG | SSH cmd err, output: <nil>: 
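
The WaitForSSH step above simply retries a no-op "exit 0" over an external ssh client until the guest accepts connections. A minimal sketch of that probe pattern in Go (illustrative only, not minikube's actual implementation; the address, key path, retry count and delay are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForSSH retries a no-op command over an external ssh client until the
// guest accepts the connection or the attempts run out, mirroring the
// "exit 0" probe logged above.
func waitForSSH(addr, keyPath string, attempts int, delay time.Duration) error {
	args := []string{
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "ConnectTimeout=10",
		"-i", keyPath,
		"docker@" + addr,
		"exit 0",
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = exec.Command("ssh", args...).Run(); lastErr == nil {
			return nil // guest is reachable over SSH
		}
		time.Sleep(delay)
	}
	return fmt.Errorf("ssh did not become available: %w", lastErr)
}

func main() {
	// Address is taken from the log; the key path is a placeholder.
	if err := waitForSSH("192.168.61.91", "/path/to/id_rsa", 10, 3*time.Second); err != nil {
		fmt.Println(err)
	}
}
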
	I0316 00:16:33.523747  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetConfigRaw
	I0316 00:16:33.524393  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.526639  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527046  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.527080  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527278  123537 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:16:33.527509  123537 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:33.527527  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:33.527766  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.529906  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.530210  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530341  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.530596  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530816  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530953  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.531119  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.531334  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.531348  123537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:33.635573  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:33.635601  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.635879  123537 buildroot.go:166] provisioning hostname "embed-certs-666637"
	I0316 00:16:33.635905  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.636109  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.638998  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639369  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.639417  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639629  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.639795  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.639971  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.640103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.640366  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.640524  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.640543  123537 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-666637 && echo "embed-certs-666637" | sudo tee /etc/hostname
	I0316 00:16:33.757019  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-666637
	
	I0316 00:16:33.757049  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.759808  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760120  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.760154  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760375  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.760583  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760723  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760829  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.760951  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.761121  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.761144  123537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-666637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-666637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-666637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:33.873548  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:33.873587  123537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:33.873642  123537 buildroot.go:174] setting up certificates
	I0316 00:16:33.873654  123537 provision.go:84] configureAuth start
	I0316 00:16:33.873666  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.873986  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.876609  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.876976  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.877004  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.877194  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.879624  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880156  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.880185  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880300  123537 provision.go:143] copyHostCerts
	I0316 00:16:33.880359  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:33.880370  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:33.880441  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:33.880526  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:33.880534  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:33.880558  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:33.880625  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:33.880632  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:33.880653  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:33.880707  123537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.embed-certs-666637 san=[127.0.0.1 192.168.61.91 embed-certs-666637 localhost minikube]
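
The provisioner here issues a per-machine server certificate whose SANs cover the loopback address, the VM IP and the machine's host names. A self-contained sketch of producing such a certificate with the Go standard library (self-signed for brevity, whereas the real flow signs with the cluster CA; the names, IPs and output file names are taken from the log purely as examples):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Subject Alternative Names mirroring the san=[...] list in the log.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-666637"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-666637", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.91")},
	}
	// Self-signed here; the real provisioner signs with the CA key from ca-key.pem.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	certOut, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	certOut.Close()
	keyOut, err := os.Create("server-key.pem")
	if err != nil {
		panic(err)
	}
	pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	keyOut.Close()
}
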
	I0316 00:16:33.984403  123537 provision.go:177] copyRemoteCerts
	I0316 00:16:33.984471  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:33.984499  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.987297  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987711  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.987741  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987894  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.988108  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.988284  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.988456  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.069540  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:34.094494  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 00:16:34.119198  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:34.144669  123537 provision.go:87] duration metric: took 271.000471ms to configureAuth
	I0316 00:16:34.144701  123537 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:34.144891  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:34.144989  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.148055  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148464  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.148496  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148710  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.148918  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149097  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149251  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.149416  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.149580  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.149596  123537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:34.414026  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:34.414058  123537 machine.go:97] duration metric: took 886.536134ms to provisionDockerMachine
	I0316 00:16:34.414070  123537 start.go:293] postStartSetup for "embed-certs-666637" (driver="kvm2")
	I0316 00:16:34.414081  123537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:34.414101  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.414464  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:34.414497  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.417211  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417482  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.417520  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417617  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.417804  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.417990  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.418126  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.498223  123537 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:34.502954  123537 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:34.502989  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:34.503068  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:34.503156  123537 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:34.503258  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:34.513065  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:34.537606  123537 start.go:296] duration metric: took 123.521431ms for postStartSetup
	I0316 00:16:34.537657  123537 fix.go:56] duration metric: took 18.450434099s for fixHost
	I0316 00:16:34.537679  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.540574  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.540908  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.540950  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.541086  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.541302  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541471  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541609  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.541803  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.542009  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.542025  123537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:34.648381  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548194.613058580
	
	I0316 00:16:34.648419  123537 fix.go:216] guest clock: 1710548194.613058580
	I0316 00:16:34.648427  123537 fix.go:229] Guest: 2024-03-16 00:16:34.61305858 +0000 UTC Remote: 2024-03-16 00:16:34.537661993 +0000 UTC m=+286.854063579 (delta=75.396587ms)
	I0316 00:16:34.648454  123537 fix.go:200] guest clock delta is within tolerance: 75.396587ms
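
The clock check above runs `date +%s.%N` in the guest and accepts the result when it is close enough to the host clock. A small sketch of that comparison (the 2s tolerance is an assumption for illustration; the guest and host timestamps are copied from the log):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far the
// guest clock is from the supplied host reference time.
func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(0, 1710548194537661993) // host reference, from the log
	delta, err := clockDelta("1710548194.613058580", host)
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("delta=%v within tolerance=%v: %v\n",
		delta, tolerance, math.Abs(float64(delta)) <= float64(tolerance))
}
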
	I0316 00:16:34.648459  123537 start.go:83] releasing machines lock for "embed-certs-666637", held for 18.561300744s
	I0316 00:16:34.648483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.648770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:34.651350  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651748  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.651794  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651926  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652573  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652810  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652907  123537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:34.652965  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.653064  123537 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:34.653090  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.655796  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656121  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656149  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656170  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656281  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656461  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.656562  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656586  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656640  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.656739  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656807  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.656883  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.657023  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.657249  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.759596  123537 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:34.765571  123537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:34.915897  123537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:34.923372  123537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:34.923471  123537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:34.940579  123537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:34.940613  123537 start.go:494] detecting cgroup driver to use...
	I0316 00:16:34.940699  123537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:34.957640  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:34.971525  123537 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:34.971598  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:34.987985  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:35.001952  123537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:35.124357  123537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:35.273948  123537 docker.go:233] disabling docker service ...
	I0316 00:16:35.274037  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:35.291073  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:35.311209  123537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:35.460630  123537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:35.581263  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:35.596460  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:35.617992  123537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:35.618042  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.628372  123537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:35.628426  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.639487  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.650397  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.662065  123537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:35.676003  123537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:35.686159  123537 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:35.686241  123537 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:35.699814  123537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
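
When the bridge netfilter sysctl cannot be verified because the module is not yet loaded, the sequence above falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A rough Go equivalent of those three shell steps (requires root; error handling simplified, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// 1. Check whether the bridge netfilter sysctl exists yet.
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if _, err := os.Stat(sysctlPath); err != nil {
		// 2. Not present: load the br_netfilter kernel module.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v: %s\n", err, out)
			return
		}
	}
	// 3. Enable IPv4 forwarding, i.e. `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
		fmt.Printf("enabling ip_forward failed: %v\n", err)
	}
}
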
	I0316 00:16:35.710182  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:35.831831  123537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:35.977556  123537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:35.977638  123537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:35.982729  123537 start.go:562] Will wait 60s for crictl version
	I0316 00:16:35.982806  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:16:35.986695  123537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:36.023299  123537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:36.023412  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.055441  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.090313  123537 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:36.091622  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:36.094687  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095062  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:36.095098  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095277  123537 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:36.099781  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:36.113522  123537 kubeadm.go:877] updating cluster {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:36.113674  123537 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:36.113743  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:36.152208  123537 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:36.152300  123537 ssh_runner.go:195] Run: which lz4
	I0316 00:16:36.156802  123537 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:36.161430  123537 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:36.161472  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:35.911510  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting to get IP...
	I0316 00:16:35.912562  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.912986  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.913064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:35.912955  124655 retry.go:31] will retry after 248.147893ms: waiting for machine to come up
	I0316 00:16:36.162476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163094  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163127  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.163032  124655 retry.go:31] will retry after 387.219214ms: waiting for machine to come up
	I0316 00:16:36.551678  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552203  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552236  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.552178  124655 retry.go:31] will retry after 391.385671ms: waiting for machine to come up
	I0316 00:16:36.945741  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946275  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.946216  124655 retry.go:31] will retry after 470.449619ms: waiting for machine to come up
	I0316 00:16:37.417836  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418324  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418353  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.418259  124655 retry.go:31] will retry after 508.962644ms: waiting for machine to come up
	I0316 00:16:37.929194  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929710  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.929671  124655 retry.go:31] will retry after 877.538639ms: waiting for machine to come up
	I0316 00:16:38.808551  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809061  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809100  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:38.809002  124655 retry.go:31] will retry after 754.319242ms: waiting for machine to come up
	I0316 00:16:39.565060  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565475  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565512  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:39.565411  124655 retry.go:31] will retry after 1.472475348s: waiting for machine to come up
	I0316 00:16:37.946470  123537 crio.go:444] duration metric: took 1.789700065s to copy over tarball
	I0316 00:16:37.946552  123537 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:40.497841  123537 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551257887s)
	I0316 00:16:40.497867  123537 crio.go:451] duration metric: took 2.551367803s to extract the tarball
	I0316 00:16:40.497875  123537 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:40.539695  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:40.588945  123537 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:40.588974  123537 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:40.588983  123537 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.28.4 crio true true} ...
	I0316 00:16:40.589125  123537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-666637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:40.589216  123537 ssh_runner.go:195] Run: crio config
	I0316 00:16:40.641673  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:40.641702  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:40.641719  123537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:40.641754  123537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-666637 NodeName:embed-certs-666637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:40.641939  123537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-666637"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:40.642024  123537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:40.652461  123537 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:40.652539  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:40.662114  123537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 00:16:40.679782  123537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:40.701982  123537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0316 00:16:40.720088  123537 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:40.724199  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:40.737133  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:40.860343  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:40.878437  123537 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637 for IP: 192.168.61.91
	I0316 00:16:40.878466  123537 certs.go:194] generating shared ca certs ...
	I0316 00:16:40.878489  123537 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:40.878690  123537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:40.878766  123537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:40.878779  123537 certs.go:256] generating profile certs ...
	I0316 00:16:40.878888  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/client.key
	I0316 00:16:40.878990  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key.07955952
	I0316 00:16:40.879059  123537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key
	I0316 00:16:40.879178  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:40.879225  123537 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:40.879239  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:40.879271  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:40.879302  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:40.879352  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:40.879409  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:40.880141  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:40.924047  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:40.962441  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:41.000283  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:41.034353  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 00:16:41.069315  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:16:41.100325  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:16:41.129285  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:16:41.155899  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:16:41.180657  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:16:41.205961  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:16:41.231886  123537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:16:41.249785  123537 ssh_runner.go:195] Run: openssl version
	I0316 00:16:41.255703  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:16:41.266968  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271536  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271595  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.277460  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:16:41.288854  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:16:41.300302  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305189  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305256  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.311200  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:16:41.322784  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:16:41.334879  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339774  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339837  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.345746  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:16:41.357661  123537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:16:41.362469  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:16:41.368875  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:16:41.375759  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:16:41.382518  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:16:41.388629  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:16:41.394882  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:16:41.401114  123537 kubeadm.go:391] StartCluster: {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:16:41.401243  123537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:16:41.401304  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.449499  123537 cri.go:89] found id: ""
	I0316 00:16:41.449590  123537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:16:41.461139  123537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:16:41.461165  123537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:16:41.461173  123537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:16:41.461243  123537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:16:41.473648  123537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:16:41.474652  123537 kubeconfig.go:125] found "embed-certs-666637" server: "https://192.168.61.91:8443"
	I0316 00:16:41.476724  123537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:16:41.488387  123537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0316 00:16:41.488426  123537 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:16:41.488439  123537 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:16:41.488485  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.526197  123537 cri.go:89] found id: ""
	I0316 00:16:41.526283  123537 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:16:41.545489  123537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:16:41.555977  123537 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:16:41.555998  123537 kubeadm.go:156] found existing configuration files:
	
	I0316 00:16:41.556048  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:16:41.565806  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:16:41.565891  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:16:41.575646  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:16:41.585269  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:16:41.585329  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:16:41.595336  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.605081  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:16:41.605144  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.615182  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:16:41.624781  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:16:41.624837  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:16:41.634852  123537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:16:41.644749  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.748782  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.477775  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.688730  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.039441  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039924  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:41.039885  124655 retry.go:31] will retry after 1.408692905s: waiting for machine to come up
	I0316 00:16:42.449971  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450402  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:42.450355  124655 retry.go:31] will retry after 1.539639877s: waiting for machine to come up
	I0316 00:16:43.992314  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992833  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992869  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:43.992777  124655 retry.go:31] will retry after 2.297369864s: waiting for machine to come up
	I0316 00:16:42.777223  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.944089  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:16:42.944193  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.445082  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.945117  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.963812  123537 api_server.go:72] duration metric: took 1.019723734s to wait for apiserver process to appear ...
	I0316 00:16:43.963845  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:16:43.963871  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.924208  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.924258  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.924278  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.953212  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.953245  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.964449  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.988201  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.988232  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:47.464502  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.469385  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.469421  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:47.964483  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.970448  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.970492  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:48.463984  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:48.468908  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:16:48.476120  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:16:48.476153  123537 api_server.go:131] duration metric: took 4.512298176s to wait for apiserver health ...
	I0316 00:16:48.476164  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:48.476172  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:48.478076  123537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:16:48.479565  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:16:48.490129  123537 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:16:48.516263  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:16:48.532732  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:16:48.532768  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:16:48.532778  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:16:48.532788  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:16:48.532795  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:16:48.532801  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:16:48.532808  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:16:48.532815  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:16:48.532822  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:16:48.532833  123537 system_pods.go:74] duration metric: took 16.547677ms to wait for pod list to return data ...
	I0316 00:16:48.532845  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:16:48.535945  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:16:48.535989  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:16:48.536006  123537 node_conditions.go:105] duration metric: took 3.154184ms to run NodePressure ...
	I0316 00:16:48.536027  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:48.733537  123537 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739166  123537 kubeadm.go:733] kubelet initialised
	I0316 00:16:48.739196  123537 kubeadm.go:734] duration metric: took 5.63118ms waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739209  123537 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:48.744724  123537 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.750261  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750299  123537 pod_ready.go:81] duration metric: took 5.547917ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.750310  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750323  123537 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.755340  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755362  123537 pod_ready.go:81] duration metric: took 5.029639ms for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.755371  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755379  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.761104  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761128  123537 pod_ready.go:81] duration metric: took 5.740133ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.761138  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761146  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.921215  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921244  123537 pod_ready.go:81] duration metric: took 160.08501ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.921254  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921260  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.319922  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319954  123537 pod_ready.go:81] duration metric: took 398.685799ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.319963  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319969  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.720866  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720922  123537 pod_ready.go:81] duration metric: took 400.944023ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.720948  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720967  123537 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:50.120836  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120865  123537 pod_ready.go:81] duration metric: took 399.883676ms for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:50.120875  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120882  123537 pod_ready.go:38] duration metric: took 1.381661602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:50.120923  123537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:16:50.133619  123537 ops.go:34] apiserver oom_adj: -16
	I0316 00:16:50.133653  123537 kubeadm.go:591] duration metric: took 8.672472438s to restartPrimaryControlPlane
	I0316 00:16:50.133663  123537 kubeadm.go:393] duration metric: took 8.732557685s to StartCluster
	I0316 00:16:50.133684  123537 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.133760  123537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:16:50.135355  123537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.135613  123537 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:16:50.140637  123537 out.go:177] * Verifying Kubernetes components...
	I0316 00:16:50.135727  123537 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:16:50.135843  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:50.142015  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:50.142027  123537 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-666637"
	I0316 00:16:50.142050  123537 addons.go:69] Setting default-storageclass=true in profile "embed-certs-666637"
	I0316 00:16:50.142070  123537 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-666637"
	W0316 00:16:50.142079  123537 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:16:50.142090  123537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-666637"
	I0316 00:16:50.142092  123537 addons.go:69] Setting metrics-server=true in profile "embed-certs-666637"
	I0316 00:16:50.142121  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142124  123537 addons.go:234] Setting addon metrics-server=true in "embed-certs-666637"
	W0316 00:16:50.142136  123537 addons.go:243] addon metrics-server should already be in state true
	I0316 00:16:50.142168  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142439  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142468  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142558  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142577  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.156773  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0316 00:16:50.156804  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0316 00:16:50.157267  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157268  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157591  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0316 00:16:50.157835  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157841  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157857  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157858  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157925  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.158223  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158226  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158404  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.158419  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.158731  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158753  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158795  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158828  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158932  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.159126  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.162347  123537 addons.go:234] Setting addon default-storageclass=true in "embed-certs-666637"
	W0316 00:16:50.162365  123537 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:16:50.162392  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.162612  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.162649  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.172299  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0316 00:16:50.172676  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.173173  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.173193  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.173547  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.173770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.175668  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.177676  123537 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:16:50.175968  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0316 00:16:50.176110  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0316 00:16:50.179172  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:16:50.179189  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:16:50.179206  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.179453  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179538  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179888  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.179909  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180021  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.180037  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180266  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180385  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180613  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.180788  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.180811  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.185060  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.192504  123537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:16:46.292804  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293326  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293363  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:46.293267  124655 retry.go:31] will retry after 2.301997121s: waiting for machine to come up
	I0316 00:16:48.596337  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596777  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:48.596731  124655 retry.go:31] will retry after 3.159447069s: waiting for machine to come up
	I0316 00:16:50.186146  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.186717  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.193945  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.193971  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.194051  123537 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.194079  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:16:50.194100  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.194103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.194264  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.194420  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.196511  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0316 00:16:50.197160  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.197580  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.197598  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.197658  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198007  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.198039  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.198038  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198235  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.198237  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.198435  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.198612  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.198772  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.200270  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.200540  123537 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.200554  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:16:50.200566  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.203147  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203634  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.203655  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203765  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.203966  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.204201  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.204335  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.317046  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:50.340203  123537 node_ready.go:35] waiting up to 6m0s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:50.415453  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.423732  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.424648  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:16:50.424663  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:16:50.470134  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:16:50.470164  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:16:50.518806  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:50.518833  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:16:50.570454  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:51.627153  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203388401s)
	I0316 00:16:51.627211  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627222  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627419  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211925303s)
	I0316 00:16:51.627468  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627533  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627595  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627609  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627620  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627549  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627859  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627885  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627895  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627914  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627956  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627976  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.629345  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.633811  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.633831  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.634043  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.634081  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726400  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.15588774s)
	I0316 00:16:51.726458  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726472  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.726820  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.726853  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.726875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726889  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726898  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.727178  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.727193  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.727206  123537 addons.go:470] Verifying addon metrics-server=true in "embed-certs-666637"
	I0316 00:16:51.729277  123537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0316 00:16:51.730645  123537 addons.go:505] duration metric: took 1.594919212s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0316 00:16:52.344107  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:51.757133  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757570  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Found IP for machine: 192.168.72.198
	I0316 00:16:51.757603  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has current primary IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserving static IP address...
	I0316 00:16:51.758067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.758093  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | skip adding static IP to network mk-default-k8s-diff-port-313436 - found existing host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"}
	I0316 00:16:51.758110  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserved static IP address: 192.168.72.198
	I0316 00:16:51.758120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Getting to WaitForSSH function...
	I0316 00:16:51.758138  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for SSH to be available...
	I0316 00:16:51.760276  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760596  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.760632  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760711  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH client type: external
	I0316 00:16:51.760744  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa (-rw-------)
	I0316 00:16:51.760797  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:51.760820  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | About to run SSH command:
	I0316 00:16:51.760861  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | exit 0
	I0316 00:16:51.887432  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:51.887829  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetConfigRaw
	I0316 00:16:51.888471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:51.891514  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.891923  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.891949  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.892232  123819 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:16:51.892502  123819 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:51.892527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:51.892782  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:51.895025  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.895367  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:51.895683  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895841  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:51.896178  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:51.896361  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:51.896372  123819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:52.012107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:52.012154  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012405  123819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-313436"
	I0316 00:16:52.012434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012640  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.015307  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.015823  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.015847  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.016055  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.016266  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016433  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016565  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.016758  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.016976  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.016992  123819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313436 && echo "default-k8s-diff-port-313436" | sudo tee /etc/hostname
	I0316 00:16:52.149152  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313436
	
	I0316 00:16:52.149180  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.152472  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.152852  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.152896  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.153056  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.153239  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153412  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.153837  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.154077  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.154108  123819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:52.285258  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:52.285290  123819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:52.285313  123819 buildroot.go:174] setting up certificates
	I0316 00:16:52.285323  123819 provision.go:84] configureAuth start
	I0316 00:16:52.285331  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.285631  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:52.288214  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288494  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.288527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288699  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.290965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291354  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.291380  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291571  123819 provision.go:143] copyHostCerts
	I0316 00:16:52.291644  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:52.291658  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:52.291719  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:52.291827  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:52.291839  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:52.291868  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:52.291966  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:52.291978  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:52.292005  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:52.292095  123819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313436 san=[127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube]
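The line above issues a server certificate for the SAN set [127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube], signed by the shared CA. As a rough illustration only (this is not minikube's actual code; the key size, validity window, and the freshly generated stand-in CA are assumptions), an equivalent issuance with Go's crypto/x509 could look like this:

// Rough sketch only (not minikube's implementation): issue a server cert for
// the SAN set shown in the log line above, signed by a freshly generated CA.
// In the real flow the CA would be loaded from ca.pem / ca-key.pem instead.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; assumption: 2048-bit RSA and 1-year validity.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(1, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	must(err)
	caCert, err := x509.ParseCertificate(caDER)
	must(err)

	// Server certificate whose SANs match the logged san=[...] list.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-313436"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-313436", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.198")},
	}
	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	must(err)
	must(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER}))
}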
	I0316 00:16:52.536692  123819 provision.go:177] copyRemoteCerts
	I0316 00:16:52.536756  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:52.536790  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.539525  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.539805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.539837  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.540067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.540264  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.540424  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.540599  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:52.629139  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:52.655092  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0316 00:16:52.681372  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:52.706496  123819 provision.go:87] duration metric: took 421.160351ms to configureAuth
	I0316 00:16:52.706529  123819 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:52.706737  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:52.706828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.709743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710173  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.710198  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710403  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.710616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710822  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710983  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.711148  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.711359  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.711380  123819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:53.005107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:53.005138  123819 machine.go:97] duration metric: took 1.112619102s to provisionDockerMachine
	I0316 00:16:53.005153  123819 start.go:293] postStartSetup for "default-k8s-diff-port-313436" (driver="kvm2")
	I0316 00:16:53.005166  123819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:53.005185  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.005547  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:53.005581  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.008749  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009170  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.009196  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009416  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.009617  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.009795  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.009973  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.100468  123819 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:53.105158  123819 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:53.105181  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:53.105243  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:53.105314  123819 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:53.105399  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:53.116078  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:53.142400  123819 start.go:296] duration metric: took 137.231635ms for postStartSetup
	I0316 00:16:53.142454  123819 fix.go:56] duration metric: took 18.493815855s for fixHost
	I0316 00:16:53.142483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.145282  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145658  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.145688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145878  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.146104  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146288  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146445  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.146625  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:53.146820  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:53.146834  123819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:53.260232  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548213.237261690
	
	I0316 00:16:53.260255  123819 fix.go:216] guest clock: 1710548213.237261690
	I0316 00:16:53.260262  123819 fix.go:229] Guest: 2024-03-16 00:16:53.23726169 +0000 UTC Remote: 2024-03-16 00:16:53.142460792 +0000 UTC m=+262.706636561 (delta=94.800898ms)
	I0316 00:16:53.260292  123819 fix.go:200] guest clock delta is within tolerance: 94.800898ms
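The fix.go lines above compare the guest's `date +%s.%N` output (1710548213.237261690) against the host-side timestamp and accept the roughly 94.8ms delta. A minimal Go sketch of that comparison follows; the values are taken from the log, but the tolerance constant is an assumption, since the actual threshold is not shown here:

// Minimal sketch (not minikube's code) of the guest-clock check logged above.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as captured in the log.
	guestOut := "1710548213.237261690"
	parts := strings.SplitN(guestOut, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	// "Remote" (host-side) timestamp from the same log line.
	host := time.Date(2024, 3, 16, 0, 16, 53, 142460792, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}

	// Assumed tolerance; the real threshold is not visible in this log.
	const tolerance = 1 * time.Second
	if delta <= tolerance {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
	}
}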
	I0316 00:16:53.260298  123819 start.go:83] releasing machines lock for "default-k8s-diff-port-313436", held for 18.611697781s
	I0316 00:16:53.260323  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.260629  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:53.263641  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264002  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.264032  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.264889  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265217  123819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:53.265273  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.265404  123819 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:53.265434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.268274  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268538  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268684  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268727  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.268969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268995  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.269113  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269206  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.269298  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269419  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.269476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269572  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.372247  123819 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:53.378643  123819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:53.527036  123819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:53.534220  123819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:53.534312  123819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:53.554856  123819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:53.554900  123819 start.go:494] detecting cgroup driver to use...
	I0316 00:16:53.554971  123819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:53.580723  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:53.599919  123819 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:53.599996  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:53.613989  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:53.628748  123819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:53.745409  123819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:53.906668  123819 docker.go:233] disabling docker service ...
	I0316 00:16:53.906733  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:53.928452  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:53.949195  123819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:54.118868  123819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:54.250006  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:54.264754  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:54.285825  123819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:54.285890  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.298522  123819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:54.298590  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.311118  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.323928  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
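The three sed invocations above pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. A hedged Go sketch of the same line-oriented rewrite is shown below; the file path and key names come from the log, everything else is illustrative:

// Illustrative only: apply the same line rewrites the sed commands above
// perform on the CRI-O drop-in config.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log

	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf := string(data)

	// pause_image = "registry.k8s.io/pause:3.9"
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	// Drop any existing conmon_cgroup line, then set cgroup_manager and
	// re-add conmon_cgroup = "pod" right after it.
	conf = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		panic(err)
	}
}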
	I0316 00:16:54.336128  123819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:54.348715  123819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:54.359657  123819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:54.359718  123819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:54.376411  123819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:16:54.388136  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:54.530444  123819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:54.681895  123819 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:54.681984  123819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:54.687334  123819 start.go:562] Will wait 60s for crictl version
	I0316 00:16:54.687398  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:16:54.691443  123819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:54.730408  123819 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:54.730505  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.761591  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.792351  123819 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
	I0316 00:16:54.793693  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:54.797023  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797439  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:54.797471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797665  123819 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:54.802065  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:54.815168  123819 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:54.815285  123819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:54.815345  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:54.855493  123819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:54.855553  123819 ssh_runner.go:195] Run: which lz4
	I0316 00:16:54.860096  123819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:54.865644  123819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:54.865675  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:54.345117  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:56.346342  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:57.346164  123537 node_ready.go:49] node "embed-certs-666637" has status "Ready":"True"
	I0316 00:16:57.346194  123537 node_ready.go:38] duration metric: took 7.005950923s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:57.346207  123537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:57.361331  123537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377726  123537 pod_ready.go:92] pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace has status "Ready":"True"
	I0316 00:16:57.377750  123537 pod_ready.go:81] duration metric: took 16.388353ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377760  123537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
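The pod_ready waits above poll each system-critical pod until its Ready condition reports True. A sketch of that kind of readiness poll with client-go follows; the namespace, pod name, and 6-minute budget mirror the log, but the helper itself is an illustration rather than minikube's implementation:

// Hypothetical sketch (assumes k8s.io/client-go): poll a pod until its
// Ready condition is True, similar to the pod_ready waits logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	deadline := time.Now().Add(6 * time.Minute) // mirrors the 6m0s wait in the log
	for time.Now().Before(deadline) {
		p, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-embed-certs-666637", metav1.GetOptions{})
		if err == nil && podReady(p) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}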
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
	I0316 00:16:56.676506  123819 crio.go:444] duration metric: took 1.816442841s to copy over tarball
	I0316 00:16:56.676609  123819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:59.338617  123819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661966532s)
	I0316 00:16:59.338655  123819 crio.go:451] duration metric: took 2.662115388s to extract the tarball
	I0316 00:16:59.338665  123819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:59.387693  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:59.453534  123819 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:59.453565  123819 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:59.453575  123819 kubeadm.go:928] updating node { 192.168.72.198 8444 v1.28.4 crio true true} ...
	I0316 00:16:59.453744  123819 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-313436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:59.453841  123819 ssh_runner.go:195] Run: crio config
	I0316 00:16:59.518492  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:16:59.518525  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:59.518543  123819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:59.518572  123819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.198 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313436 NodeName:default-k8s-diff-port-313436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:59.518791  123819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.198
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313436"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:59.518876  123819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:59.529778  123819 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:59.529860  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:59.542186  123819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0316 00:16:59.563037  123819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:59.585167  123819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 00:16:59.607744  123819 ssh_runner.go:195] Run: grep 192.168.72.198	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:59.612687  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:59.628607  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:59.767487  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:59.786494  123819 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436 for IP: 192.168.72.198
	I0316 00:16:59.786520  123819 certs.go:194] generating shared ca certs ...
	I0316 00:16:59.786545  123819 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:59.786688  123819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:59.786722  123819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:59.786728  123819 certs.go:256] generating profile certs ...
	I0316 00:16:59.786827  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.key
	I0316 00:16:59.786975  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key.254d5830
	I0316 00:16:59.787049  123819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key
	I0316 00:16:59.787204  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:59.787248  123819 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:59.787262  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:59.787295  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:59.787351  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:59.787386  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:59.787449  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:59.788288  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:59.824257  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:59.859470  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:59.904672  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:59.931832  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0316 00:16:59.965654  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:00.006949  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:00.039120  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:00.071341  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:00.095585  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:00.122165  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:00.149982  123819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:00.170019  123819 ssh_runner.go:195] Run: openssl version
	I0316 00:17:00.176232  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:00.188738  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193708  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193780  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.200433  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:00.215116  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:00.228871  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234074  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234141  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.240553  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:00.252454  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:00.264690  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269493  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269573  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.275584  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:00.287859  123819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:00.292474  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:00.298744  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:00.304793  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:00.311156  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:00.317777  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:00.324148  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
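The six openssl invocations above all use `-checkend 86400`, i.e. they fail if the named control-plane certificate would expire within the next 24 hours. The following is a minimal illustrative sketch in Go of that same check, not minikube's own code; the certificate path is just one of the files already listed in the log.

// cert_check.go: sketch of the "-checkend 86400" freshness test performed above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path taken from the log; any PEM-encoded certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent of -checkend 86400: is the cert still valid 86400 seconds from now?
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}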
	I0316 00:17:00.330667  123819 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:00.330763  123819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:00.330813  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.374868  123819 cri.go:89] found id: ""
	I0316 00:17:00.374961  123819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:00.386218  123819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:00.386240  123819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:00.386245  123819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:00.386288  123819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:00.397129  123819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:00.398217  123819 kubeconfig.go:125] found "default-k8s-diff-port-313436" server: "https://192.168.72.198:8444"
	I0316 00:17:00.400506  123819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:00.411430  123819 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.198
	I0316 00:17:00.411462  123819 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:00.411477  123819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:00.411528  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.448545  123819 cri.go:89] found id: ""
	I0316 00:17:00.448619  123819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:00.469230  123819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:00.480622  123819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:00.480644  123819 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:00.480695  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0316 00:16:59.384420  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.094272  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.390117  123537 pod_ready.go:92] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.390145  123537 pod_ready.go:81] duration metric: took 5.012377671s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.390156  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398207  123537 pod_ready.go:92] pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.398236  123537 pod_ready.go:81] duration metric: took 8.071855ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398248  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405415  123537 pod_ready.go:92] pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.405443  123537 pod_ready.go:81] duration metric: took 7.186495ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405453  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412646  123537 pod_ready.go:92] pod "kube-proxy-8fpc5" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.412665  123537 pod_ready.go:81] duration metric: took 7.204465ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412673  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606336  123537 pod_ready.go:92] pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.606369  123537 pod_ready.go:81] duration metric: took 193.687951ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606384  123537 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:00.492088  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:00.743504  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:00.756322  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0316 00:17:00.766476  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:00.766545  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:00.776849  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.786610  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:00.786676  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.797455  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0316 00:17:00.808026  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:00.808083  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:00.819306  123819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:00.834822  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:00.962203  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.535753  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.762322  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.843195  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.944855  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:01.944971  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.446047  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.945791  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.983641  123819 api_server.go:72] duration metric: took 1.038786332s to wait for apiserver process to appear ...
	I0316 00:17:02.983680  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:02.983704  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:04.615157  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:07.114447  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:06.343729  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.343763  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.343786  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.364621  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.364659  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.483852  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.491403  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.491433  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:06.983931  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.994258  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.994296  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.483821  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.506265  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:07.506301  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.983846  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.988700  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:17:07.995996  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:17:07.996021  123819 api_server.go:131] duration metric: took 5.012333318s to wait for apiserver health ...
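The healthz sequence above polls https://192.168.72.198:8444/healthz roughly every half second, logging 403/500 responses until the apiserver finally answers 200 "ok". A minimal sketch of that polling pattern is shown below; it is illustrative only, not the api_server.go implementation, and skipping TLS verification here is an assumption made to keep the sketch self-contained.

// healthz_poll.go: sketch of polling an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch: skip certificate verification; a real client
		// would trust the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.198:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy before the deadline")
}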
	I0316 00:17:07.996032  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:17:07.996041  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:07.998091  123819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:17:07.999628  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:17:08.010263  123819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:17:08.041667  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:17:08.053611  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:17:08.053656  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:17:08.053668  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:17:08.053681  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:17:08.053694  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:17:08.053706  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:17:08.053717  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:17:08.053730  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:17:08.053739  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:17:08.053747  123819 system_pods.go:74] duration metric: took 12.054433ms to wait for pod list to return data ...
	I0316 00:17:08.053763  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:17:08.057781  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:17:08.057808  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:17:08.057818  123819 node_conditions.go:105] duration metric: took 4.047698ms to run NodePressure ...
	I0316 00:17:08.057837  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:08.282870  123819 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288338  123819 kubeadm.go:733] kubelet initialised
	I0316 00:17:08.288359  123819 kubeadm.go:734] duration metric: took 5.456436ms waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288367  123819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:08.294256  123819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.302762  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302802  123819 pod_ready.go:81] duration metric: took 8.523485ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.302814  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302823  123819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.309581  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309604  123819 pod_ready.go:81] duration metric: took 6.77179ms for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.309617  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309625  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.315399  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315419  123819 pod_ready.go:81] duration metric: took 5.78558ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.315428  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315434  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.445776  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445808  123819 pod_ready.go:81] duration metric: took 130.363739ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.445821  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445829  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.846181  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846228  123819 pod_ready.go:81] duration metric: took 400.382095ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.846243  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846251  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.245568  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245599  123819 pod_ready.go:81] duration metric: took 399.329058ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.245612  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245618  123819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.646855  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646888  123819 pod_ready.go:81] duration metric: took 401.262603ms for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.646901  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646909  123819 pod_ready.go:38] duration metric: took 1.358531936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
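The pod_ready.go entries above repeatedly fetch each system pod and test its "Ready" condition, skipping pods whose node is not yet Ready. The sketch below shows that condition check with client-go under stated assumptions: the kubeconfig path and pod name are illustrative, and this is not the test helper itself.

// pod_ready.go (sketch): read one pod and report whether its Ready condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; the tests use the per-profile kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(),
		"etcd-default-k8s-diff-port-313436", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	ready := false
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, ready)
}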
	I0316 00:17:09.646926  123819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:17:09.659033  123819 ops.go:34] apiserver oom_adj: -16
	I0316 00:17:09.659059  123819 kubeadm.go:591] duration metric: took 9.272806311s to restartPrimaryControlPlane
	I0316 00:17:09.659070  123819 kubeadm.go:393] duration metric: took 9.328414192s to StartCluster
	I0316 00:17:09.659091  123819 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.659166  123819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:09.661439  123819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.661729  123819 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:17:09.663462  123819 out.go:177] * Verifying Kubernetes components...
	I0316 00:17:09.661800  123819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:17:09.661986  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:17:09.664841  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:09.664874  123819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664839  123819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664964  123819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.664980  123819 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:17:09.664847  123819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.665023  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.665037  123819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.665053  123819 addons.go:243] addon metrics-server should already be in state true
	I0316 00:17:09.665084  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.664922  123819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313436"
	I0316 00:17:09.665349  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665377  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665445  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665474  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665607  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665637  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.680337  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0316 00:17:09.680351  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0316 00:17:09.680799  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.680939  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.681331  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681366  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681541  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681560  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681736  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.681974  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.682359  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682407  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.682461  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682494  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.683660  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0316 00:17:09.684088  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.684575  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.684600  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.684992  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.685218  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.688973  123819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.688994  123819 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:17:09.689028  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.689372  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.689397  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.698126  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0316 00:17:09.698527  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.699052  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.699079  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.699407  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.699606  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.700389  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0316 00:17:09.700824  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.701308  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.701327  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.701610  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.701681  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.704168  123819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:17:09.701891  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.704403  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0316 00:17:09.706042  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:17:09.706076  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:17:09.706102  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.706988  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.707805  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.707831  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.708465  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.708556  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.709451  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.709500  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.709520  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.711354  123819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:09.709911  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.710103  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.712849  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.712865  123819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:09.712886  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:17:09.712910  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.713010  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.713202  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.713365  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.715688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716029  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.716064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716260  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.716437  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.716662  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.716826  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.725309  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0316 00:17:09.725659  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.726175  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.726191  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.726492  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.726665  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.728459  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.728721  123819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.728739  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:17:09.728753  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.732122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732546  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.732576  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732733  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.732908  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.733064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.733206  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.838182  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:09.857248  123819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:09.956751  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:17:09.956775  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:17:09.982142  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.992293  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:17:09.992319  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:17:10.000878  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:10.035138  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:10.035171  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:17:10.066721  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:11.153759  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171576504s)
	I0316 00:17:11.153815  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.153828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154237  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154241  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154262  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.154271  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.154281  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154569  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154601  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154609  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165531  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.165579  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.165868  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.165922  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165879  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536530  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.469764101s)
	I0316 00:17:11.536596  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536607  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536648  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53572281s)
	I0316 00:17:11.536694  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536713  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536963  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536988  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536995  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537001  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537005  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537010  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537013  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537019  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537218  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537365  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537376  123819 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-313436"
	I0316 00:17:11.537404  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537425  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.539481  123819 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0316 00:17:09.114699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:11.613507  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:13.204814  123454 start.go:364] duration metric: took 52.116735477s to acquireMachinesLock for "no-preload-238598"
	I0316 00:17:13.204888  123454 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:17:13.204900  123454 fix.go:54] fixHost starting: 
	I0316 00:17:13.205405  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:13.205446  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:13.222911  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0316 00:17:13.223326  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:13.223784  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:17:13.223811  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:13.224153  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:13.224338  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:13.224507  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:17:13.226028  123454 fix.go:112] recreateIfNeeded on no-preload-238598: state=Stopped err=<nil>
	I0316 00:17:13.226051  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	W0316 00:17:13.226232  123454 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:17:13.227865  123454 out.go:177] * Restarting existing kvm2 VM for "no-preload-238598" ...
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:17:11.540876  123819 addons.go:505] duration metric: took 1.87908534s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0316 00:17:11.862772  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.866333  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.229181  123454 main.go:141] libmachine: (no-preload-238598) Calling .Start
	I0316 00:17:13.229409  123454 main.go:141] libmachine: (no-preload-238598) Ensuring networks are active...
	I0316 00:17:13.230257  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network default is active
	I0316 00:17:13.230618  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network mk-no-preload-238598 is active
	I0316 00:17:13.231135  123454 main.go:141] libmachine: (no-preload-238598) Getting domain xml...
	I0316 00:17:13.232023  123454 main.go:141] libmachine: (no-preload-238598) Creating domain...
	I0316 00:17:14.513800  123454 main.go:141] libmachine: (no-preload-238598) Waiting to get IP...
	I0316 00:17:14.514838  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.515446  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.515520  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.515407  125029 retry.go:31] will retry after 275.965955ms: waiting for machine to come up
	I0316 00:17:14.793095  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.793594  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.793721  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.793667  125029 retry.go:31] will retry after 347.621979ms: waiting for machine to come up
	I0316 00:17:15.143230  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.143869  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.143909  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.143820  125029 retry.go:31] will retry after 301.441766ms: waiting for machine to come up
	I0316 00:17:15.446476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.446917  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.446964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.446865  125029 retry.go:31] will retry after 431.207345ms: waiting for machine to come up
	I0316 00:17:13.615911  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.616381  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:17.618352  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:16.362143  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:16.866488  123819 node_ready.go:49] node "default-k8s-diff-port-313436" has status "Ready":"True"
	I0316 00:17:16.866522  123819 node_ready.go:38] duration metric: took 7.00923342s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:16.866535  123819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:16.881909  123819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897574  123819 pod_ready.go:92] pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:16.897617  123819 pod_ready.go:81] duration metric: took 15.618728ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897630  123819 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:18.910740  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.879693  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.880186  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.880222  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.880148  125029 retry.go:31] will retry after 747.650888ms: waiting for machine to come up
	I0316 00:17:16.629378  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:16.631312  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:16.631352  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:16.631193  125029 retry.go:31] will retry after 670.902171ms: waiting for machine to come up
	I0316 00:17:17.304282  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:17.304704  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:17.304751  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:17.304658  125029 retry.go:31] will retry after 1.160879196s: waiting for machine to come up
	I0316 00:17:18.466662  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:18.467103  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:18.467136  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:18.467049  125029 retry.go:31] will retry after 948.597188ms: waiting for machine to come up
	I0316 00:17:19.417144  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:19.417623  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:19.417657  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:19.417561  125029 retry.go:31] will retry after 1.263395738s: waiting for machine to come up
	I0316 00:17:20.289713  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.613643  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:21.865146  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.241535  123819 pod_ready.go:92] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.241561  123819 pod_ready.go:81] duration metric: took 5.34392174s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.241573  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247469  123819 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.247501  123819 pod_ready.go:81] duration metric: took 5.919787ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247515  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756151  123819 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.756180  123819 pod_ready.go:81] duration metric: took 508.652978ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756194  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762214  123819 pod_ready.go:92] pod "kube-proxy-btmmm" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.762254  123819 pod_ready.go:81] duration metric: took 6.041426ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762268  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769644  123819 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.769668  123819 pod_ready.go:81] duration metric: took 7.391813ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769681  123819 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:24.780737  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.682443  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:20.798804  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:20.798840  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:20.682821  125029 retry.go:31] will retry after 1.834378571s: waiting for machine to come up
	I0316 00:17:22.518539  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:22.518997  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:22.519027  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:22.518945  125029 retry.go:31] will retry after 1.944866033s: waiting for machine to come up
	I0316 00:17:24.466332  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:24.466902  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:24.466930  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:24.466847  125029 retry.go:31] will retry after 3.4483736s: waiting for machine to come up
	I0316 00:17:24.615642  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.113920  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.278017  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:29.777128  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.919457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:27.919931  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:27.919964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:27.919891  125029 retry.go:31] will retry after 3.122442649s: waiting for machine to come up
	I0316 00:17:29.613500  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.613674  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.276855  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:34.277228  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.044512  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:31.044939  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:31.044970  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:31.044884  125029 retry.go:31] will retry after 4.529863895s: waiting for machine to come up
	I0316 00:17:34.112266  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:36.118023  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:35.576311  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.576834  123454 main.go:141] libmachine: (no-preload-238598) Found IP for machine: 192.168.50.137
	I0316 00:17:35.576858  123454 main.go:141] libmachine: (no-preload-238598) Reserving static IP address...
	I0316 00:17:35.576875  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has current primary IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.577312  123454 main.go:141] libmachine: (no-preload-238598) Reserved static IP address: 192.168.50.137
	I0316 00:17:35.577355  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.577365  123454 main.go:141] libmachine: (no-preload-238598) Waiting for SSH to be available...
	I0316 00:17:35.577404  123454 main.go:141] libmachine: (no-preload-238598) DBG | skip adding static IP to network mk-no-preload-238598 - found existing host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"}
	I0316 00:17:35.577419  123454 main.go:141] libmachine: (no-preload-238598) DBG | Getting to WaitForSSH function...
	I0316 00:17:35.579640  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580061  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.580108  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580210  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH client type: external
	I0316 00:17:35.580269  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa (-rw-------)
	I0316 00:17:35.580303  123454 main.go:141] libmachine: (no-preload-238598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:35.580319  123454 main.go:141] libmachine: (no-preload-238598) DBG | About to run SSH command:
	I0316 00:17:35.580339  123454 main.go:141] libmachine: (no-preload-238598) DBG | exit 0
	I0316 00:17:35.711373  123454 main.go:141] libmachine: (no-preload-238598) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:35.711791  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetConfigRaw
	I0316 00:17:35.712598  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:35.715455  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.715929  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.715954  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.716326  123454 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:17:35.716525  123454 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:35.716551  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:35.716802  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.719298  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719612  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.719644  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719780  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.720005  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720178  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720315  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.720487  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.720666  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.720677  123454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:35.835733  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:35.835760  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836004  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:17:35.836033  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836240  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.839024  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839413  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.839445  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839627  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.839811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.839977  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.840133  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.840279  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.840485  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.840504  123454 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-238598 && echo "no-preload-238598" | sudo tee /etc/hostname
	I0316 00:17:35.976590  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-238598
	
	I0316 00:17:35.976624  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.979354  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979689  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.979720  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979879  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.980104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980267  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980445  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.980602  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.980796  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.980815  123454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-238598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-238598/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-238598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:36.106710  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:36.106750  123454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:36.106774  123454 buildroot.go:174] setting up certificates
	I0316 00:17:36.106786  123454 provision.go:84] configureAuth start
	I0316 00:17:36.106800  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:36.107104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.110050  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110431  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.110476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110592  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.113019  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113366  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.113391  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113517  123454 provision.go:143] copyHostCerts
	I0316 00:17:36.113595  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:36.113619  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:36.113699  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:36.113898  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:36.113911  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:36.113964  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:36.114051  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:36.114063  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:36.114089  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:36.114155  123454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.no-preload-238598 san=[127.0.0.1 192.168.50.137 localhost minikube no-preload-238598]
	I0316 00:17:36.239622  123454 provision.go:177] copyRemoteCerts
	I0316 00:17:36.239706  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:36.239736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.242440  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.242806  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.242841  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.243086  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.243279  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.243482  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.243623  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.330601  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:36.359600  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:17:36.384258  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:36.409195  123454 provision.go:87] duration metric: took 302.39571ms to configureAuth
	I0316 00:17:36.409239  123454 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:36.409440  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:17:36.409539  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.412280  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412618  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.412652  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.413039  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413217  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413366  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.413576  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.413803  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.413823  123454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:36.703300  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:36.703365  123454 machine.go:97] duration metric: took 986.82471ms to provisionDockerMachine
	I0316 00:17:36.703418  123454 start.go:293] postStartSetup for "no-preload-238598" (driver="kvm2")
	I0316 00:17:36.703440  123454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:36.703474  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.703838  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:36.703880  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.706655  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707019  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.707057  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707237  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.707470  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.707626  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.707822  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.794605  123454 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:36.799121  123454 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:36.799151  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:36.799222  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:36.799298  123454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:36.799423  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:36.808805  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:36.834244  123454 start.go:296] duration metric: took 130.803052ms for postStartSetup
	I0316 00:17:36.834290  123454 fix.go:56] duration metric: took 23.629390369s for fixHost
	I0316 00:17:36.834318  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.837197  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837643  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.837684  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837926  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.838155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838360  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838533  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.838721  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.838965  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.838982  123454 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:17:36.956309  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548256.900043121
	
	I0316 00:17:36.956352  123454 fix.go:216] guest clock: 1710548256.900043121
	I0316 00:17:36.956366  123454 fix.go:229] Guest: 2024-03-16 00:17:36.900043121 +0000 UTC Remote: 2024-03-16 00:17:36.83429667 +0000 UTC m=+356.318603082 (delta=65.746451ms)
	I0316 00:17:36.956398  123454 fix.go:200] guest clock delta is within tolerance: 65.746451ms
	I0316 00:17:36.956425  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 23.751563248s
	I0316 00:17:36.956472  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.956736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.960077  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960494  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.960524  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960678  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961247  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961454  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961522  123454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:36.961588  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.961730  123454 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:36.961756  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.964457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964801  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.964834  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964905  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965346  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965374  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.965406  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965518  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.965609  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965681  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.965739  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965866  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.966034  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:37.077559  123454 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:37.084485  123454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:37.229503  123454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:37.236783  123454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:37.236862  123454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:37.255248  123454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:37.255275  123454 start.go:494] detecting cgroup driver to use...
	I0316 00:17:37.255377  123454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:37.272795  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:37.289822  123454 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:37.289885  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:37.306082  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:37.322766  123454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:37.448135  123454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:37.614316  123454 docker.go:233] disabling docker service ...
	I0316 00:17:37.614381  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:37.630091  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:37.645025  123454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:37.773009  123454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:37.891459  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:37.906829  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:37.927910  123454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:17:37.927982  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.939166  123454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:37.939226  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.950487  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.961547  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.972402  123454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:37.983413  123454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:37.993080  123454 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:37.993147  123454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:38.007746  123454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:38.017917  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:38.158718  123454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:38.329423  123454 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:38.329520  123454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:38.334518  123454 start.go:562] Will wait 60s for crictl version
	I0316 00:17:38.334570  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.338570  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:38.375688  123454 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:38.375779  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.408167  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.444754  123454 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.277480  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.281375  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.446078  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:38.448885  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449299  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:38.449329  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449565  123454 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:38.453922  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:38.467515  123454 kubeadm.go:877] updating cluster {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:38.467646  123454 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:17:38.467690  123454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:38.511057  123454 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:17:38.511093  123454 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:38.511189  123454 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.511221  123454 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0316 00:17:38.511240  123454 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.511253  123454 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.511305  123454 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.511335  123454 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.511338  123454 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.511188  123454 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.512934  123454 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.512949  123454 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.512953  123454 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0316 00:17:38.513014  123454 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.648129  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.650306  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.661334  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0316 00:17:38.666656  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.669280  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.684494  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.690813  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.760339  123454 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0316 00:17:38.760396  123454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.760449  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.760545  123454 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0316 00:17:38.760585  123454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.760641  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908463  123454 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0316 00:17:38.908491  123454 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0316 00:17:38.908515  123454 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.908525  123454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908579  123454 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0316 00:17:38.908607  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.908615  123454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.908585  123454 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908638  123454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.908739  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.954587  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.954611  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.954699  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.961857  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.961878  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0316 00:17:38.961979  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:38.962005  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.962010  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:39.052859  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.052888  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0316 00:17:39.052907  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.052958  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.052976  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.053001  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0316 00:17:39.052963  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.053055  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.053060  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0316 00:17:39.053100  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:39.053156  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.053235  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.120914  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.612614  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.779012  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:43.278631  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:41.133735  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.080597621s)
	I0316 00:17:41.133778  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0316 00:17:41.133890  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.080807025s)
	I0316 00:17:41.133924  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0316 00:17:41.133942  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.08085981s)
	I0316 00:17:41.133972  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133978  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.080988823s)
	I0316 00:17:41.133993  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133948  123454 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134011  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.080758975s)
	I0316 00:17:41.134031  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0316 00:17:41.134032  123454 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.01309054s)
	I0316 00:17:41.134060  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134083  123454 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0316 00:17:41.134110  123454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:41.134160  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:43.198894  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.064808781s)
	I0316 00:17:43.198926  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0316 00:17:43.198952  123454 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.198951  123454 ssh_runner.go:235] Completed: which crictl: (2.064761171s)
	I0316 00:17:43.199004  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.199051  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:43.112939  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.114446  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.613592  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.776235  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.777686  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.278307  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.110501  123454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.911421102s)
	I0316 00:17:47.110567  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0316 00:17:47.110695  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.911660704s)
	I0316 00:17:47.110728  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0316 00:17:47.110751  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:47.110703  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:47.110802  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:49.585079  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.474253503s)
	I0316 00:17:49.585109  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0316 00:17:49.585130  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.474308112s)
	I0316 00:17:49.585160  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0316 00:17:49.585134  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.585220  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.613704  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.615227  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:54.780467  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.736360  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.151102687s)
	I0316 00:17:51.736402  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0316 00:17:51.736463  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:51.736535  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:54.214591  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477993231s)
	I0316 00:17:54.214629  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0316 00:17:54.214658  123454 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:54.214728  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:55.171123  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0316 00:17:55.171204  123454 cache_images.go:123] Successfully loaded all cached images
	I0316 00:17:55.171213  123454 cache_images.go:92] duration metric: took 16.660103091s to LoadCachedImages
	I0316 00:17:55.171233  123454 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:17:55.171506  123454 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-238598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:55.171617  123454 ssh_runner.go:195] Run: crio config
	I0316 00:17:55.225056  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:17:55.225078  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:55.225089  123454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:55.225110  123454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-238598 NodeName:no-preload-238598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:17:55.225278  123454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-238598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:55.225371  123454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:17:55.237834  123454 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:55.237896  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:55.248733  123454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 00:17:55.266587  123454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:17:55.285283  123454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0316 00:17:55.303384  123454 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:55.307384  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:55.321079  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:55.453112  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:55.470573  123454 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598 for IP: 192.168.50.137
	I0316 00:17:55.470600  123454 certs.go:194] generating shared ca certs ...
	I0316 00:17:55.470623  123454 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:55.470808  123454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:55.470868  123454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:55.470906  123454 certs.go:256] generating profile certs ...
	I0316 00:17:55.471028  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.key
	I0316 00:17:55.471140  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key.0f2ae39d
	I0316 00:17:55.471195  123454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key
	I0316 00:17:55.471410  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:55.471463  123454 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:55.471483  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:55.471515  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:55.471542  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:55.471568  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:55.471612  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:55.472267  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:55.517524  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:54.115678  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:56.613196  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.277553  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:59.277770  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.567992  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:55.601463  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:55.637956  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:17:55.670063  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:55.694990  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:55.718916  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:17:55.744124  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:55.770051  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:55.794846  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:55.819060  123454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:55.836991  123454 ssh_runner.go:195] Run: openssl version
	I0316 00:17:55.844665  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:55.857643  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862493  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862561  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.868430  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:55.880551  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:55.891953  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896627  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896687  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.902539  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:55.915215  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:55.926699  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931120  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931172  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.936791  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
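The block above installs each minikube CA certificate into the node's shared trust store and then creates the hashed symlink OpenSSL uses for CA lookup (b5213941.0, 51391683.0 and 3ec20f2e.0 correspond to minikubeCA.pem, 82870.pem and 828702.pem in this run). A minimal sketch of the same sequence for a single certificate, using the paths from the log:

    # Link the CA into /etc/ssl/certs, then add the OpenSSL subject-hash symlink
    # (for minikubeCA.pem the printed hash matches the b5213941.0 link above).
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"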
	I0316 00:17:55.948180  123454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:55.953021  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:55.959107  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:55.965018  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:55.971159  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:55.977069  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:55.983062  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
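The six openssl runs above verify that none of the control-plane certificates expire within the next 24 hours (86400 seconds) before the restart is attempted. The same check can be reproduced by hand; -checkend exits non-zero if the certificate would expire inside the given window:

    # Exit status 0: certificate is still valid for at least another 86400 seconds.
    sudo openssl x509 -noout -checkend 86400 \
      -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      && echo "valid for at least 24h" \
      || echo "expires within 24h"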
	I0316 00:17:55.989119  123454 kubeadm.go:391] StartCluster: {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:55.989201  123454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:55.989254  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.029128  123454 cri.go:89] found id: ""
	I0316 00:17:56.029209  123454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:56.040502  123454 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:56.040525  123454 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:56.040531  123454 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:56.040577  123454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:56.051843  123454 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:56.052995  123454 kubeconfig.go:125] found "no-preload-238598" server: "https://192.168.50.137:8443"
	I0316 00:17:56.055273  123454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:56.066493  123454 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0316 00:17:56.066547  123454 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:56.066564  123454 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:56.066641  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.111015  123454 cri.go:89] found id: ""
	I0316 00:17:56.111110  123454 ssh_runner.go:195] Run: sudo systemctl stop kubelet
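Before replaying the kubeadm phases, minikube lists any kube-system containers via the CRI and stops the kubelet; here the crictl query returns no IDs, so only kubelet is stopped. A rough by-hand equivalent (the crictl stop step is an assumption covering the case where containers are actually found, which did not happen in this run):

    # List kube-system containers known to the CRI, stop them if any, then stop kubelet.
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop $ids   # assumed follow-up; no IDs were found above
    sudo systemctl stop kubelet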
	I0316 00:17:56.131392  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:56.142638  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:56.142665  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:56.142725  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:56.154318  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:56.154418  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:56.166011  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:56.176688  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:56.176752  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:56.187776  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.198216  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:56.198285  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.208661  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:56.218587  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:56.218655  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
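The grep/rm pairs above are the stale-config cleanup: each generated kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise deleted so the kubeadm phases below regenerate it. Condensed into a loop over the same four files:

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"   # missing file or wrong endpoint: remove and regenerate
    done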
	I0316 00:17:56.230247  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:56.241302  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:56.361423  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.731067  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.369591288s)
	I0316 00:17:57.731101  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.952457  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.044540  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
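Rather than running a full kubeadm init, the restart path replays individual init phases against the generated config, in the order shown above (certs, kubeconfig, kubelet-start, control-plane, etcd). The same sequence, with the pinned kubeadm binary on PATH as in the log:

    # Replay the kubeadm init phases one at a time against the generated config.
    for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
      sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
    done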
	I0316 00:17:58.179796  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:58.179894  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.680635  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.180617  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.205383  123454 api_server.go:72] duration metric: took 1.025590775s to wait for apiserver process to appear ...
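After the control-plane phase, minikube simply polls pgrep roughly twice a second until a kube-apiserver process whose command line mentions minikube appears (the repeated pgrep lines above, and from the other profiles interleaved in this log). A hand-rolled equivalent of that wait loop:

    # Wait (polling every 0.5s, as in the log) for the apiserver process to appear.
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done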
	I0316 00:17:59.205411  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:59.205436  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:59.205935  123454 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0316 00:17:59.706543  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:58.613340  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:00.618869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:01.914835  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.914865  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:01.914879  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:01.972138  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.972173  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:02.206540  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.219111  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.219165  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:02.705639  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.709820  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.709850  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:03.206513  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:03.216320  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:18:03.224237  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:18:03.224263  123454 api_server.go:131] duration metric: took 4.018845389s to wait for apiserver health ...
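The healthz wait above goes through the typical startup progression: 403 while the apiserver still treats the unauthenticated probe as system:anonymous, 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, and finally 200. The same endpoint can be probed directly; -k is used here only because the test cluster's serving certificate is not in the local trust store:

    # Poll the apiserver health endpoint until it reports "ok".
    until [ "$(curl -sk https://192.168.50.137:8443/healthz)" = "ok" ]; do
      sleep 0.5
    done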
	I0316 00:18:03.224272  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:18:03.224279  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:18:03.225951  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.777309  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.777625  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.227382  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:18:03.245892  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:18:03.267423  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:18:03.281349  123454 system_pods.go:59] 8 kube-system pods found
	I0316 00:18:03.281387  123454 system_pods.go:61] "coredns-76f75df574-d2f6z" [3cd22981-0f83-4a60-9930-c103cfc2d2ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:18:03.281397  123454 system_pods.go:61] "etcd-no-preload-238598" [d98fa5b6-ad24-4c90-98c8-9e5b8f1a3250] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:18:03.281408  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [e7d7a5a0-9a4f-4df2-aaf7-44c36e5bd313] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:18:03.281420  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [a198865e-0ed5-40b6-8b10-a4fccdefa059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:18:03.281434  123454 system_pods.go:61] "kube-proxy-cjhzn" [6529873c-cb9d-42d8-991d-e450783b1707] Running
	I0316 00:18:03.281443  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [bfb373fb-ec78-4ef1-b92e-3a8af3f805a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:18:03.281457  123454 system_pods.go:61] "metrics-server-57f55c9bc5-hffvp" [4181fe7f-3e95-455b-a744-8f4dca7b870d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:18:03.281466  123454 system_pods.go:61] "storage-provisioner" [d568ae10-7b9c-4c98-8263-a09505227ac7] Running
	I0316 00:18:03.281485  123454 system_pods.go:74] duration metric: took 14.043103ms to wait for pod list to return data ...
	I0316 00:18:03.281501  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:18:03.284899  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:18:03.284923  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:18:03.284934  123454 node_conditions.go:105] duration metric: took 3.425812ms to run NodePressure ...
	I0316 00:18:03.284955  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:18:03.562930  123454 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568376  123454 kubeadm.go:733] kubelet initialised
	I0316 00:18:03.568402  123454 kubeadm.go:734] duration metric: took 5.44437ms waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568412  123454 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:18:03.574420  123454 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:03.113622  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.613724  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:07.614087  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.278238  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.776236  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.582284  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.081679  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.082343  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.113282  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.114515  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.776835  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.777258  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.778115  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.582099  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:13.082243  123454 pod_ready.go:92] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:13.082263  123454 pod_ready.go:81] duration metric: took 9.507817974s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:13.082271  123454 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:15.088733  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.613599  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:16.614876  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.280289  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.777434  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:17.089800  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.092413  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.092441  123454 pod_ready.go:81] duration metric: took 6.010161958s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.092453  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.097972  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.097996  123454 pod_ready.go:81] duration metric: took 5.533097ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.098008  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102186  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.102204  123454 pod_ready.go:81] duration metric: took 4.187939ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102213  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106692  123454 pod_ready.go:92] pod "kube-proxy-cjhzn" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.106712  123454 pod_ready.go:81] duration metric: took 4.492665ms for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106720  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111735  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.111754  123454 pod_ready.go:81] duration metric: took 5.027601ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111764  123454 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.113278  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.114061  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
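With no control-plane containers found for this profile (the kube-apiserver through kubernetes-dashboard queries all come back empty), minikube falls back to collecting node-level diagnostics, and the describe-nodes attempt fails because nothing is listening on localhost:8443. The same diagnostics can be pulled manually on the node with the commands from the log:

    sudo crictl ps -a                                  # container status
    sudo journalctl -u kubelet -n 400                  # kubelet logs
    sudo journalctl -u crio -n 400                     # CRI-O logs
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400   # kernel warnings/errors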
	I0316 00:18:22.276633  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:24.278807  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.119790  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.618664  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.115414  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.613572  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:26.778891  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:29.277585  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.619282  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.118484  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.121236  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.114043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.119153  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.614043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:31.778203  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.276424  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.618082  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.619339  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.614209  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.113521  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:36.279218  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:38.779161  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.118552  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.619543  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.614042  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.113784  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:41.278664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:43.777450  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.119118  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.119473  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.614102  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:47.112496  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:46.277664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.279095  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:46.619201  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.619302  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:49.113616  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.613449  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:18:50.777409  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:52.779497  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.278072  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.119041  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:53.121052  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:54.113699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:56.613686  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:57.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.277696  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.618835  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.118984  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.119379  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.614207  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:01.113795  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:02.281155  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.779663  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:02.618637  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.619492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:03.613777  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.114458  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:07.276601  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.277239  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.619784  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.118699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:08.613361  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.615062  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:11.277319  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.777280  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:11.119614  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.618997  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.113490  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:15.613530  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:17.613578  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.276204  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.277156  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.118717  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.618005  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:19.614161  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.112808  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:20.777843  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.778609  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.780571  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:20.618505  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.619290  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.118778  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.113901  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:26.115541  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:27.277159  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:29.277242  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:27.618996  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:30.118650  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:28.614101  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.114366  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:31.776661  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.778372  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:32.125130  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:34.619153  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.114785  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.116692  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:37.613605  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:36.276574  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.276784  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:36.619780  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.619966  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:39.614178  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.616246  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:40.779366  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.277656  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.279201  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.118560  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.120706  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:44.113022  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:46.114296  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:47.778494  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.277998  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.619070  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.622001  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.118739  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:48.114952  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.614794  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:52.776113  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.777687  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:52.119145  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.619675  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:53.113139  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:55.113961  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.613751  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.277412  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.277555  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.119685  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.618622  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.614914  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:02.113286  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:01.777542  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.278277  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:01.618756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.119973  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.113918  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.613434  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:06.278976  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.778022  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.124642  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.618968  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.613517  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.613699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.613997  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:11.277492  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.777429  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.619721  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.120185  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.114540  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:17.614281  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:15.781621  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.277078  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.277734  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.620224  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.118862  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.118920  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.117088  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.614917  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:22.779251  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.276842  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.118990  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:24.119699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.114563  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.614869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.277136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.777082  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:26.619354  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:28.619489  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.619807  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:32.117311  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:32.277582  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.778394  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.622010  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:33.119518  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:35.119736  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.613788  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.277007  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.776793  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:37.121196  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.619239  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:38.616664  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:41.112900  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:41.777952  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.276802  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:42.119128  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.119255  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:43.114941  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:45.614095  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:47.616615  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.277300  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.777275  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.119389  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.618309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.116327  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.614990  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:50.777563  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.778761  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.276863  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.619469  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:53.119593  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.116184  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:57.613355  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:57.776955  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.276381  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.619683  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:58.122772  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:59.616518  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.115379  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.613248  123537 pod_ready.go:81] duration metric: took 4m0.006848891s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:02.613273  123537 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:02.613280  123537 pod_ready.go:38] duration metric: took 4m5.267062496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:02.613297  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:02.613347  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:02.613393  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:02.670107  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:02.670139  123537 cri.go:89] found id: ""
	I0316 00:21:02.670149  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:02.670210  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.675144  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:02.675212  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:02.720695  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:02.720720  123537 cri.go:89] found id: ""
	I0316 00:21:02.720729  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:02.720790  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.725490  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:02.725570  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.276825  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.779811  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.617765  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.619210  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.619603  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.778908  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:02.778959  123537 cri.go:89] found id: ""
	I0316 00:21:02.778971  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:02.779028  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.784772  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:02.784864  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:02.830682  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:02.830709  123537 cri.go:89] found id: ""
	I0316 00:21:02.830719  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:02.830784  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.835733  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:02.835813  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:02.875862  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:02.875890  123537 cri.go:89] found id: ""
	I0316 00:21:02.875902  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:02.875967  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.880801  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:02.880857  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:02.921585  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:02.921611  123537 cri.go:89] found id: ""
	I0316 00:21:02.921622  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:02.921689  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.929521  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:02.929593  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.977621  123537 cri.go:89] found id: ""
	I0316 00:21:02.977646  123537 logs.go:276] 0 containers: []
	W0316 00:21:02.977657  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.977668  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:02.977723  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:03.020159  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.020186  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.020193  123537 cri.go:89] found id: ""
	I0316 00:21:03.020204  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:03.020274  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.025593  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.030718  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:03.030744  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:03.090141  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:03.090182  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:03.147416  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:03.147466  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:03.189686  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:03.189733  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:03.245980  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:03.246020  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.296494  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:03.296534  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:03.349602  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:03.349635  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:03.364783  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:03.364819  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:03.513917  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:03.513955  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:03.567916  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:03.567952  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:03.607620  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:03.607658  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:03.658683  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:03.658717  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.699797  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:03.699827  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:06.715440  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:06.733725  123537 api_server.go:72] duration metric: took 4m16.598062692s to wait for apiserver process to appear ...
	I0316 00:21:06.733759  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:06.733810  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:06.733868  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:06.775396  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:06.775431  123537 cri.go:89] found id: ""
	I0316 00:21:06.775442  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:06.775506  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.780448  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:06.780503  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:06.836927  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:06.836962  123537 cri.go:89] found id: ""
	I0316 00:21:06.836972  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:06.837025  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.841803  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:06.841869  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:06.887445  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:06.887470  123537 cri.go:89] found id: ""
	I0316 00:21:06.887479  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:06.887534  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.892112  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:06.892192  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:06.936614  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:06.936642  123537 cri.go:89] found id: ""
	I0316 00:21:06.936653  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:06.936717  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.943731  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:06.943799  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:06.986738  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:06.986764  123537 cri.go:89] found id: ""
	I0316 00:21:06.986774  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:06.986843  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.991555  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:06.991621  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:07.052047  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:07.052074  123537 cri.go:89] found id: ""
	I0316 00:21:07.052082  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:07.052133  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.057297  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:07.057358  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:07.104002  123537 cri.go:89] found id: ""
	I0316 00:21:07.104034  123537 logs.go:276] 0 containers: []
	W0316 00:21:07.104042  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:07.104049  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:07.104113  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:07.148540  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:07.148562  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:07.148566  123537 cri.go:89] found id: ""
	I0316 00:21:07.148572  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:07.148620  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.153502  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.157741  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:07.157770  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:07.197856  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:07.197889  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:07.654282  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:07.654324  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:07.708539  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:07.708579  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:07.725072  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:07.725104  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.277657  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.780721  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.121773  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.619756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.862465  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:07.862498  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:07.925812  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:07.925846  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:07.986121  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:07.986152  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:08.036774  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:08.036817  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:08.091902  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:08.091933  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:08.142096  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:08.142128  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:08.210747  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:08.210789  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:08.270225  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:08.270259  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:10.817112  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:21:10.822359  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:21:10.823955  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:10.823978  123537 api_server.go:131] duration metric: took 4.090210216s to wait for apiserver health ...
	I0316 00:21:10.823988  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:10.824019  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:10.824076  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:10.872487  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:10.872514  123537 cri.go:89] found id: ""
	I0316 00:21:10.872524  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:10.872590  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.877131  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:10.877197  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:10.916699  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:10.916728  123537 cri.go:89] found id: ""
	I0316 00:21:10.916737  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:10.916797  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.921114  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:10.921182  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:10.964099  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:10.964123  123537 cri.go:89] found id: ""
	I0316 00:21:10.964132  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:10.964191  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.968716  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:10.968788  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.008883  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.008909  123537 cri.go:89] found id: ""
	I0316 00:21:11.008919  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:11.008974  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.014068  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.014138  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.067209  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.067239  123537 cri.go:89] found id: ""
	I0316 00:21:11.067251  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:11.067315  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.072536  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.072663  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.119366  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.119399  123537 cri.go:89] found id: ""
	I0316 00:21:11.119411  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:11.119462  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.124502  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.124590  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.169458  123537 cri.go:89] found id: ""
	I0316 00:21:11.169494  123537 logs.go:276] 0 containers: []
	W0316 00:21:11.169505  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.169513  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:11.169576  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:11.218886  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:11.218923  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:11.218928  123537 cri.go:89] found id: ""
	I0316 00:21:11.218938  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:11.219002  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.223583  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.228729  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:11.228753  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:11.282781  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:11.282818  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:11.347330  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:11.347379  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.401191  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:11.401225  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.453126  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:11.453158  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.523058  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.523110  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.944108  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.944157  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:12.001558  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:12.001602  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:12.062833  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:12.062885  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:12.078726  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:12.078762  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:12.209248  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:12.209284  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:12.251891  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:12.251930  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:12.296240  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:12.296271  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:14.846244  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:14.846274  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.846279  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.846283  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.846287  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.846290  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.846294  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.846299  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.846302  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.846309  123537 system_pods.go:74] duration metric: took 4.022315588s to wait for pod list to return data ...
	I0316 00:21:14.846317  123537 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:14.848830  123537 default_sa.go:45] found service account: "default"
	I0316 00:21:14.848852  123537 default_sa.go:55] duration metric: took 2.529805ms for default service account to be created ...
	I0316 00:21:14.848859  123537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:14.861369  123537 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:14.861396  123537 system_pods.go:89] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.861401  123537 system_pods.go:89] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.861405  123537 system_pods.go:89] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.861409  123537 system_pods.go:89] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.861448  123537 system_pods.go:89] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.861456  123537 system_pods.go:89] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.861465  123537 system_pods.go:89] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.861470  123537 system_pods.go:89] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.861478  123537 system_pods.go:126] duration metric: took 12.614437ms to wait for k8s-apps to be running ...
	I0316 00:21:14.861488  123537 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:14.861534  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:14.879439  123537 system_svc.go:56] duration metric: took 17.934537ms WaitForService to wait for kubelet
	I0316 00:21:14.879484  123537 kubeadm.go:576] duration metric: took 4m24.743827748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:14.879523  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:14.882642  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:14.882673  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:14.882716  123537 node_conditions.go:105] duration metric: took 3.184841ms to run NodePressure ...
	I0316 00:21:14.882733  123537 start.go:240] waiting for startup goroutines ...
	I0316 00:21:14.882749  123537 start.go:245] waiting for cluster config update ...
	I0316 00:21:14.882789  123537 start.go:254] writing updated cluster config ...
	I0316 00:21:14.883119  123537 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:14.937804  123537 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:14.939886  123537 out.go:177] * Done! kubectl is now configured to use "embed-certs-666637" cluster and "default" namespace by default
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:12.278383  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.279769  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:12.124356  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.619164  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:16.777597  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.277188  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.119492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.119935  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
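
The cycle above is the pattern this run repeats while the control plane is unreachable: for each component it runs `sudo crictl ps -a --quiet --name=<component>` and treats empty output as "No container was found matching". Below is a minimal Go sketch of that same check; listContainerIDs is a hypothetical helper (not minikube's own code) and it runs crictl locally instead of through ssh_runner.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the pattern in the log above: run
// `crictl ps -a --quiet --name=<component>` and split the output into
// container IDs. An empty result means no container matched.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %q failed: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}
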
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
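
With no containers to inspect, the fallback sources gathered above are the kubelet and CRI-O journals, dmesg, and overall container status. The sketch below runs those same commands locally for illustration (the real flow drives them over SSH via ssh_runner); only the command strings are taken from the log, the surrounding code is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

// The diagnostic commands gathered in the log when the control plane is
// not answering, run locally here as a simplification.
var logSources = map[string]string{
	"kubelet":          `journalctl -u kubelet -n 400`,
	"CRI-O":            `journalctl -u crio -n 400`,
	"dmesg":            `dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
	"container status": "`which crictl || echo crictl` ps -a || docker ps -a",
}

func main() {
	for name, cmd := range logSources {
		fmt.Printf("==> %s <==\n", name)
		out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}
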
	I0316 00:21:21.278136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:22.779721  123819 pod_ready.go:81] duration metric: took 4m0.010022344s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:22.779752  123819 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:22.779762  123819 pod_ready.go:38] duration metric: took 4m5.913207723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:22.779779  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:22.779814  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:22.779876  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:22.836022  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:22.836058  123819 cri.go:89] found id: ""
	I0316 00:21:22.836069  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:22.836131  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.841289  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:22.841362  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:22.883980  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:22.884007  123819 cri.go:89] found id: ""
	I0316 00:21:22.884018  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:22.884084  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.889352  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:22.889427  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:22.929947  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:22.929977  123819 cri.go:89] found id: ""
	I0316 00:21:22.929987  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:22.930033  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.935400  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:22.935485  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:22.975548  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:22.975580  123819 cri.go:89] found id: ""
	I0316 00:21:22.975598  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:22.975671  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.981916  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:22.981998  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.019925  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.019965  123819 cri.go:89] found id: ""
	I0316 00:21:23.019977  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:23.020046  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.024870  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.024960  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.068210  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.068241  123819 cri.go:89] found id: ""
	I0316 00:21:23.068253  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:23.068344  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.073492  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.073578  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.113267  123819 cri.go:89] found id: ""
	I0316 00:21:23.113301  123819 logs.go:276] 0 containers: []
	W0316 00:21:23.113311  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.113319  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:23.113382  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:23.160155  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:23.160175  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.160179  123819 cri.go:89] found id: ""
	I0316 00:21:23.160192  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:23.160241  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.165125  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.169508  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:23.169530  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.218749  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:23.218786  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.274140  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:23.274177  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.320515  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:23.320559  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:23.835119  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:23.835173  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:23.907635  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.907691  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.925071  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:23.925126  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:23.991996  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:23.992028  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:24.032865  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.032899  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.090947  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:24.090987  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:24.285862  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:24.285896  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:24.337983  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:24.338027  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:24.379626  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:24.379657  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:21.618894  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:24.122648  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:26.918844  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.938014  123819 api_server.go:72] duration metric: took 4m17.276244s to wait for apiserver process to appear ...
	I0316 00:21:26.938053  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:26.938095  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:26.938157  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:26.983515  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:26.983538  123819 cri.go:89] found id: ""
	I0316 00:21:26.983546  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:26.983595  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:26.989278  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:26.989341  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:27.039968  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.040000  123819 cri.go:89] found id: ""
	I0316 00:21:27.040009  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:27.040078  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.045617  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:27.045687  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:27.085920  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.085948  123819 cri.go:89] found id: ""
	I0316 00:21:27.085960  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:27.086029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.090911  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:27.090989  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:27.137289  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:27.137322  123819 cri.go:89] found id: ""
	I0316 00:21:27.137333  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:27.137393  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.141956  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:27.142031  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:27.180823  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.180845  123819 cri.go:89] found id: ""
	I0316 00:21:27.180854  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:27.180919  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.185439  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:27.185523  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:27.225775  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:27.225797  123819 cri.go:89] found id: ""
	I0316 00:21:27.225805  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:27.225854  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.230648  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:27.230717  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:27.269429  123819 cri.go:89] found id: ""
	I0316 00:21:27.269465  123819 logs.go:276] 0 containers: []
	W0316 00:21:27.269477  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:27.269485  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:27.269550  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:27.308288  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.308316  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.308321  123819 cri.go:89] found id: ""
	I0316 00:21:27.308329  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:27.308378  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.312944  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.317794  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:27.317829  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:27.364287  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:27.364323  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.419482  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:27.419521  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.468553  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:27.468585  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.513287  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:27.513320  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.561382  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:27.561426  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.601292  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:27.601325  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:27.656848  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:27.656902  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:27.796212  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:27.796245  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:28.246569  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:28.246611  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:28.302971  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:28.303015  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:28.359613  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:28.359645  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:28.375844  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:28.375877  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:26.124217  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:28.619599  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
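
The grep/rm sequence above is the stale-kubeconfig cleanup: each kubeconfig is kept only if it already references https://control-plane.minikube.internal:8443, and because none of the files exist in this run every grep exits with status 2 and the rm is effectively a no-op before kubeadm init. A rough Go equivalent follows, assuming the same four paths; cleanStaleConfig is a hypothetical helper, not the function used by minikube.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStaleConfig keeps a kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise it removes the file so
// `kubeadm init` regenerates it, mirroring the grep/rm sequence above.
func cleanStaleConfig(path string) error {
	if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
		fmt.Printf("%q may not be in %s - will remove: %v\n", endpoint, path, err)
		return exec.Command("sudo", "rm", "-f", path).Run()
	}
	return nil
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		if err := cleanStaleConfig(f); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
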
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:30.921320  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:21:30.926064  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:21:30.927332  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:30.927353  123819 api_server.go:131] duration metric: took 3.989292523s to wait for apiserver health ...
	I0316 00:21:30.927361  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:30.927386  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:30.927438  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:30.975348  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:30.975376  123819 cri.go:89] found id: ""
	I0316 00:21:30.975389  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:30.975459  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:30.980128  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:30.980194  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:31.029534  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.029563  123819 cri.go:89] found id: ""
	I0316 00:21:31.029574  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:31.029627  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.034066  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:31.034149  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:31.073857  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.073884  123819 cri.go:89] found id: ""
	I0316 00:21:31.073892  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:31.073961  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.078421  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:31.078501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:31.117922  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.117951  123819 cri.go:89] found id: ""
	I0316 00:21:31.117964  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:31.118029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.122435  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:31.122501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:31.161059  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.161089  123819 cri.go:89] found id: ""
	I0316 00:21:31.161101  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:31.161155  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.165503  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:31.165572  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:31.207637  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.207669  123819 cri.go:89] found id: ""
	I0316 00:21:31.207679  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:31.207742  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.212296  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:31.212360  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:31.251480  123819 cri.go:89] found id: ""
	I0316 00:21:31.251519  123819 logs.go:276] 0 containers: []
	W0316 00:21:31.251530  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:31.251539  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:31.251608  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:31.296321  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.296345  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.296350  123819 cri.go:89] found id: ""
	I0316 00:21:31.296357  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:31.296414  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.302159  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.306501  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:31.306526  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.348347  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:31.348379  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.388542  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:31.388573  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:31.439926  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:31.439962  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:31.499674  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:31.499711  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:31.552720  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:31.552771  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.605281  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:31.605331  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.651964  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:31.651997  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.696113  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:31.696150  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.749712  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:31.749751  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.801476  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:31.801508  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:32.236105  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:32.236146  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:32.253815  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:32.253848  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:34.930730  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:34.930759  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.930763  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.930767  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.930772  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.930775  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.930778  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.930783  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.930788  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.930798  123819 system_pods.go:74] duration metric: took 4.003426137s to wait for pod list to return data ...
	I0316 00:21:34.930807  123819 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:34.933462  123819 default_sa.go:45] found service account: "default"
	I0316 00:21:34.933492  123819 default_sa.go:55] duration metric: took 2.674728ms for default service account to be created ...
	I0316 00:21:34.933500  123819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:34.939351  123819 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:34.939382  123819 system_pods.go:89] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.939393  123819 system_pods.go:89] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.939400  123819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.939406  123819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.939414  123819 system_pods.go:89] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.939420  123819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.939442  123819 system_pods.go:89] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.939454  123819 system_pods.go:89] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.939469  123819 system_pods.go:126] duration metric: took 5.962328ms to wait for k8s-apps to be running ...
	I0316 00:21:34.939482  123819 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:34.939539  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:34.958068  123819 system_svc.go:56] duration metric: took 18.572929ms WaitForService to wait for kubelet
	I0316 00:21:34.958108  123819 kubeadm.go:576] duration metric: took 4m25.296341727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:34.958130  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:34.962603  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:34.962629  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:34.962641  123819 node_conditions.go:105] duration metric: took 4.505615ms to run NodePressure ...
	I0316 00:21:34.962657  123819 start.go:240] waiting for startup goroutines ...
	I0316 00:21:34.962667  123819 start.go:245] waiting for cluster config update ...
	I0316 00:21:34.962690  123819 start.go:254] writing updated cluster config ...
	I0316 00:21:34.963009  123819 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:35.015774  123819 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:35.019103  123819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-313436" cluster and "default" namespace by default
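
The readiness sequence that ends in the "Done!" line above begins with polling the apiserver healthz endpoint (https://192.168.72.198:8444/healthz returned 200 after roughly four seconds here), then waits for kube-system pods, the default service account, and the kubelet service. Below is a small, self-contained sketch of just the healthz poll; the URL and overall timeout are taken from this run, while the retry interval and the skipped TLS verification are simplifications for the example.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it answers
// 200 or the deadline passes, roughly what the log above is doing.
// TLS verification is skipped only to keep the sketch short.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.198:8444/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver is healthy")
}
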
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:21:31.121456  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:33.122437  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:35.618906  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:37.619223  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:40.120743  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:42.619309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:44.619544  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:47.120179  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:49.619419  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:52.124510  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:54.125147  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:56.621651  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:59.120895  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:01.618287  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:03.620297  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:06.119870  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:08.122618  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:10.619464  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.121381  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:15.619590  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.122483  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:19.112568  123454 pod_ready.go:81] duration metric: took 4m0.000767313s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	E0316 00:22:19.112600  123454 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0316 00:22:19.112621  123454 pod_ready.go:38] duration metric: took 4m15.544198169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:22:19.112652  123454 kubeadm.go:591] duration metric: took 4m23.072115667s to restartPrimaryControlPlane
	W0316 00:22:19.112713  123454 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:22:19.112769  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
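
The repeated [kubelet-check] warnings above come from kubeadm probing the kubelet's local healthz port and getting "connection refused" because the kubelet never came up on this node. The snippet below only reproduces that probe by hand; it is not kubeadm's implementation.

package main

import (
	"fmt"
	"io"
	"net/http"
)

// One-shot version of the probe kubeadm keeps retrying above: an HTTP GET
// against the kubelet's local healthz port (10248).
func main() {
	resp, err := http.Get("http://localhost:10248/healthz")
	if err != nil {
		fmt.Println("kubelet healthz probe failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("kubelet healthz: %s %s\n", resp.Status, string(body))
}
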
	I0316 00:22:51.249327  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.136527598s)
	I0316 00:22:51.249406  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:22:51.268404  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:22:51.280832  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:22:51.292639  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:22:51.292661  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:22:51.292712  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:22:51.303272  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:22:51.303347  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:22:51.313854  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:22:51.324290  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:22:51.324361  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:22:51.334879  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.345302  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:22:51.345382  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.355682  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:22:51.366601  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:22:51.366660  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:22:51.377336  123454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:22:51.594624  123454 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:00.473055  123454 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0316 00:23:00.473140  123454 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:00.473255  123454 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:00.473415  123454 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:00.473551  123454 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:00.473682  123454 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:00.475591  123454 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:00.475704  123454 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:00.475803  123454 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:00.475905  123454 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:00.476001  123454 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:00.476100  123454 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:00.476190  123454 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:00.476281  123454 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:00.476378  123454 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:00.476516  123454 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:00.476647  123454 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:00.476715  123454 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:00.476801  123454 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:00.476879  123454 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:00.476968  123454 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0316 00:23:00.477042  123454 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:00.477166  123454 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:00.477253  123454 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:00.477378  123454 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:00.477480  123454 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:00.479084  123454 out.go:204]   - Booting up control plane ...
	I0316 00:23:00.479206  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:00.479332  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:00.479440  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:00.479541  123454 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:00.479625  123454 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:00.479697  123454 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:00.479874  123454 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:23:00.479994  123454 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003092 seconds
	I0316 00:23:00.480139  123454 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:23:00.480339  123454 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:23:00.480445  123454 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:23:00.480687  123454 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-238598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:23:00.480789  123454 kubeadm.go:309] [bootstrap-token] Using token: aspuu8.i4yhgkjx7e43mgmn
	I0316 00:23:00.482437  123454 out.go:204]   - Configuring RBAC rules ...
	I0316 00:23:00.482568  123454 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:23:00.482697  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:23:00.482917  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:23:00.483119  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:23:00.483283  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:23:00.483406  123454 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:23:00.483582  123454 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:23:00.483653  123454 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:23:00.483714  123454 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:23:00.483720  123454 kubeadm.go:309] 
	I0316 00:23:00.483815  123454 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:23:00.483833  123454 kubeadm.go:309] 
	I0316 00:23:00.483973  123454 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:23:00.483986  123454 kubeadm.go:309] 
	I0316 00:23:00.484014  123454 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:23:00.484119  123454 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:23:00.484200  123454 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:23:00.484211  123454 kubeadm.go:309] 
	I0316 00:23:00.484283  123454 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:23:00.484288  123454 kubeadm.go:309] 
	I0316 00:23:00.484360  123454 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:23:00.484366  123454 kubeadm.go:309] 
	I0316 00:23:00.484452  123454 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:23:00.484560  123454 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:23:00.484657  123454 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:23:00.484666  123454 kubeadm.go:309] 
	I0316 00:23:00.484798  123454 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:23:00.484920  123454 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:23:00.484932  123454 kubeadm.go:309] 
	I0316 00:23:00.485053  123454 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485196  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:23:00.485227  123454 kubeadm.go:309] 	--control-plane 
	I0316 00:23:00.485241  123454 kubeadm.go:309] 
	I0316 00:23:00.485357  123454 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:23:00.485367  123454 kubeadm.go:309] 
	I0316 00:23:00.485488  123454 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485646  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0316 00:23:00.485661  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:23:00.485671  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:23:00.487417  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:23:00.489063  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:23:00.526147  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
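The conflist pushed above is minikube's generated bridge CNI configuration; the log records only its size (457 bytes), not its contents. A minimal sketch of what such a bridge config typically looks like follows; the subnet and plugin options are assumptions for illustration, not the literal file written to this node:

	# hedged sketch of a bridge CNI conflist of the kind written above (contents assumed, not taken from this run)
	sudo tee /etc/cni/net.d/1-k8s.conflist <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge", "bridge": "bridge", "addIf": "true",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF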
	I0316 00:23:00.571796  123454 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-238598 minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=no-preload-238598 minikube.k8s.io/primary=true
	I0316 00:23:00.892908  123454 ops.go:34] apiserver oom_adj: -16
	I0316 00:23:00.892994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.394077  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.893097  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.393114  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.893994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.393930  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.893428  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.393822  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.893810  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.393999  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.893998  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.393104  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.893725  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.393873  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.893432  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.394054  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.893595  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.393109  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.893621  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.393322  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.894024  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.393711  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.893465  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.393059  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.497890  123454 kubeadm.go:1107] duration metric: took 11.926069028s to wait for elevateKubeSystemPrivileges
	W0316 00:23:12.497951  123454 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:23:12.497962  123454 kubeadm.go:393] duration metric: took 5m16.508852945s to StartCluster
	I0316 00:23:12.497988  123454 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.498139  123454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:23:12.500632  123454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.500995  123454 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:23:12.502850  123454 out.go:177] * Verifying Kubernetes components...
	I0316 00:23:12.501089  123454 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:23:12.501233  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:23:12.504432  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:23:12.504443  123454 addons.go:69] Setting storage-provisioner=true in profile "no-preload-238598"
	I0316 00:23:12.504491  123454 addons.go:234] Setting addon storage-provisioner=true in "no-preload-238598"
	I0316 00:23:12.504502  123454 addons.go:69] Setting default-storageclass=true in profile "no-preload-238598"
	I0316 00:23:12.504515  123454 addons.go:69] Setting metrics-server=true in profile "no-preload-238598"
	I0316 00:23:12.504526  123454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-238598"
	I0316 00:23:12.504541  123454 addons.go:234] Setting addon metrics-server=true in "no-preload-238598"
	W0316 00:23:12.504551  123454 addons.go:243] addon metrics-server should already be in state true
	I0316 00:23:12.504582  123454 host.go:66] Checking if "no-preload-238598" exists ...
	W0316 00:23:12.504505  123454 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:23:12.504656  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.504996  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505012  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.505013  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505229  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.521634  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0316 00:23:12.521698  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0316 00:23:12.522283  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522377  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522836  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.522861  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.522990  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.523032  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.523203  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523375  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523737  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.523758  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524232  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.524277  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524695  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0316 00:23:12.525112  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.525610  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.525637  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.526025  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.526218  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.530010  123454 addons.go:234] Setting addon default-storageclass=true in "no-preload-238598"
	W0316 00:23:12.530029  123454 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:23:12.530053  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.530277  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.530315  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.540310  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0316 00:23:12.545850  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.545966  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0316 00:23:12.546335  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.546740  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.546761  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.547035  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.547232  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.548605  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.548626  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.549001  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.549058  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0316 00:23:12.549268  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.549323  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.549454  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.551419  123454 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:23:12.549975  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.551115  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.553027  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:23:12.553050  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:23:12.553074  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.553082  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.554948  123454 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:23:12.553404  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.556096  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556544  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.556568  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556640  123454 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.556660  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:23:12.556679  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.556769  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.557150  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.557176  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.557398  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.557600  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.557886  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.560220  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560555  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.560582  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560759  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.560982  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.561157  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.561318  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.574877  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0316 00:23:12.575802  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.576313  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.576337  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.576640  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.577015  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.578483  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.578814  123454 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.578835  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:23:12.578856  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.581832  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582439  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.582454  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.582465  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582635  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.582819  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.582969  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.729051  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:23:12.747162  123454 node_ready.go:35] waiting up to 6m0s for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.759957  123454 node_ready.go:49] node "no-preload-238598" has status "Ready":"True"
	I0316 00:23:12.759992  123454 node_ready.go:38] duration metric: took 12.79378ms for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.760006  123454 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.772201  123454 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795626  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.795660  123454 pod_ready.go:81] duration metric: took 23.429082ms for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795674  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808661  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.808688  123454 pod_ready.go:81] duration metric: took 13.006568ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808699  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821578  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.821613  123454 pod_ready.go:81] duration metric: took 12.904651ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821627  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.832585  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:23:12.832616  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:23:12.838375  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.838404  123454 pod_ready.go:81] duration metric: took 16.768452ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.838415  123454 pod_ready.go:38] duration metric: took 78.396172ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.838435  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:23:12.838522  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:23:12.889063  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.907225  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.924533  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:23:12.924565  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:23:12.947224  123454 api_server.go:72] duration metric: took 446.183679ms to wait for apiserver process to appear ...
	I0316 00:23:12.947257  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:23:12.947281  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:23:12.975463  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:12.975495  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:23:13.023702  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:23:13.039598  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:23:13.039638  123454 api_server.go:131] duration metric: took 92.372403ms to wait for apiserver health ...
	I0316 00:23:13.039649  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:23:13.069937  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:13.141358  123454 system_pods.go:59] 5 kube-system pods found
	I0316 00:23:13.141387  123454 system_pods.go:61] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.141391  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.141397  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.141400  123454 system_pods.go:61] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending
	I0316 00:23:13.141404  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.141411  123454 system_pods.go:74] duration metric: took 101.754765ms to wait for pod list to return data ...
	I0316 00:23:13.141419  123454 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:23:13.200153  123454 default_sa.go:45] found service account: "default"
	I0316 00:23:13.200193  123454 default_sa.go:55] duration metric: took 58.765381ms for default service account to be created ...
	I0316 00:23:13.200205  123454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:23:13.381398  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381431  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.381771  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.381825  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.381840  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.381849  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381862  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.382154  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.382159  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.382189  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.383303  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.383345  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.383353  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending
	I0316 00:23:13.383360  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.383368  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.383374  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.383384  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.383396  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.383440  123454 retry.go:31] will retry after 221.286986ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.408809  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.408839  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.409146  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.409191  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.409195  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.612171  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.612205  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612212  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612221  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.612226  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.612230  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.612236  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.612239  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.612260  123454 retry.go:31] will retry after 311.442515ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.934136  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.934170  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934177  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934185  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.934191  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.934197  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.934204  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.934210  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.934234  123454 retry.go:31] will retry after 453.147474ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.343055  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.435784176s)
	I0316 00:23:14.343123  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343139  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343497  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343523  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.343540  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343554  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343800  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.343876  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343895  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.404681  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.404725  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404738  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404748  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.404758  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.404767  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.404777  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.404790  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.404810  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.404821  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending
	I0316 00:23:14.404846  123454 retry.go:31] will retry after 464.575803ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.447649  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.377663696s)
	I0316 00:23:14.447706  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.447724  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448062  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448083  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448092  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.448100  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448367  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.448367  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448394  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448407  123454 addons.go:470] Verifying addon metrics-server=true in "no-preload-238598"
	I0316 00:23:14.450675  123454 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0316 00:23:14.452378  123454 addons.go:505] duration metric: took 1.951301533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
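Of the four metrics-server manifests applied above, metrics-apiservice.yaml is the one that registers the aggregated metrics API; the log shows only its size (424 bytes). A hedged sketch of what that registration typically contains (an illustrative example, not the literal file minikube copied to the node):

	# illustrative APIService registration for metrics-server (contents assumed)
	sudo tee /etc/kubernetes/addons/metrics-apiservice.yaml <<'EOF'
	apiVersion: apiregistration.k8s.io/v1
	kind: APIService
	metadata:
	  name: v1beta1.metrics.k8s.io
	spec:
	  group: metrics.k8s.io
	  version: v1beta1
	  insecureSkipTLSVerify: true
	  groupPriorityMinimum: 100
	  versionPriority: 100
	  service:
	    name: metrics-server
	    namespace: kube-system
	EOF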
	I0316 00:23:14.888167  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.888206  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:14.888219  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.888226  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.888236  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.888243  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.888252  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.888260  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.888292  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.888301  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:14.888325  123454 retry.go:31] will retry after 490.515879ms: missing components: kube-proxy
	I0316 00:23:15.389667  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:15.389694  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:15.389700  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Running
	I0316 00:23:15.389704  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:15.389708  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:15.389712  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:15.389716  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Running
	I0316 00:23:15.389721  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:15.389728  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:15.389735  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:15.389745  123454 system_pods.go:126] duration metric: took 2.189532563s to wait for k8s-apps to be running ...
	I0316 00:23:15.389757  123454 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:23:15.389805  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:15.409241  123454 system_svc.go:56] duration metric: took 19.469575ms WaitForService to wait for kubelet
	I0316 00:23:15.409273  123454 kubeadm.go:576] duration metric: took 2.908240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:23:15.409292  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:23:15.412530  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:23:15.412559  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:23:15.412570  123454 node_conditions.go:105] duration metric: took 3.272979ms to run NodePressure ...
	I0316 00:23:15.412585  123454 start.go:240] waiting for startup goroutines ...
	I0316 00:23:15.412594  123454 start.go:245] waiting for cluster config update ...
	I0316 00:23:15.412608  123454 start.go:254] writing updated cluster config ...
	I0316 00:23:15.412923  123454 ssh_runner.go:195] Run: rm -f paused
	I0316 00:23:15.468245  123454 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 00:23:15.470311  123454 out.go:177] * Done! kubectl is now configured to use "no-preload-238598" cluster and "default" namespace by default
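At this point the no-preload-238598 profile is up with storage-provisioner, default-storageclass and metrics-server enabled. A quick manual verification, sketched with the context name taken from the log (these commands are not part of the test run itself):

	kubectl --context no-preload-238598 get nodes
	kubectl --context no-preload-238598 -n kube-system get pods
	kubectl --context no-preload-238598 get apiservices v1beta1.metrics.k8s.io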
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 
	
	
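The repeated block above is the root failure for this run: kubeadm's wait-control-plane phase times out because the kubelet never answers on 127.0.0.1:10248, and minikube exits with K8S_KUBELET_NOT_RUNNING. A minimal troubleshooting sketch, using only commands the log itself recommends (run them on the affected node, e.g. over minikube ssh; the profile name in the last command is a placeholder, not taken from this run):

    # Is the kubelet service running, and why did it exit?
    sudo systemctl status kubelet
    sudo journalctl -xeu kubelet

    # Did CRI-O manage to start any control-plane containers at all?
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

    # If the kubelet logs point at a cgroup-driver mismatch, retry the start with
    # the setting the log suggests (profile name is illustrative):
    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd

The same suggestion and the related upstream issue (kubernetes/minikube#4172) appear in the log lines just above; the CRI-O section that follows is the container-runtime log gathered after the failure.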
	==> CRI-O <==
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.159184021Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1d60f13-1f13-49e0-aa80-0c05393d8f94 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.159395686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1d60f13-1f13-49e0-aa80-0c05393d8f94 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.201531751Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0e428c8-9353-4651-8aa4-1454ce7c7f76 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.201690311Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0e428c8-9353-4651-8aa4-1454ce7c7f76 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.203537623Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=25ac8b34-7ecd-49b6-925b-99ec494bbea6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.204146664Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549037204119302,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=25ac8b34-7ecd-49b6-925b-99ec494bbea6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.204789055Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=402b00b1-56d4-41ae-acc3-e105052bb4f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.204840723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=402b00b1-56d4-41ae-acc3-e105052bb4f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.205024787Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=402b00b1-56d4-41ae-acc3-e105052bb4f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.230117513Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=b61335ea-f4e7-44d4-bcc0-1a618044c288 name=/runtime.v1.RuntimeService/Status
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.230205238Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=b61335ea-f4e7-44d4-bcc0-1a618044c288 name=/runtime.v1.RuntimeService/Status
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.249041367Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b72c1189-1560-4279-8d6c-add1ac6aaf47 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.249145782Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b72c1189-1560-4279-8d6c-add1ac6aaf47 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.250837260Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=44a6e950-f572-40b8-9066-89ed7febb759 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.251415348Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549037251386750,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=44a6e950-f572-40b8-9066-89ed7febb759 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.252089857Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b4ef36c-064b-47b7-be43-55d8cbd72344 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.252141629Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b4ef36c-064b-47b7-be43-55d8cbd72344 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.252317251Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b4ef36c-064b-47b7-be43-55d8cbd72344 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.293451746Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=59649807-5262-4103-9f06-acc0e2f07b5c name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.293521511Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=59649807-5262-4103-9f06-acc0e2f07b5c name=/runtime.v1.RuntimeService/Version
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.295301700Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=922b5d24-8126-4b34-a529-dc24c7f6172f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.295976015Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549037295950956,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=922b5d24-8126-4b34-a529-dc24c7f6172f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.296771651Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5dc509da-8e63-4db0-ba8e-08921ad0995b name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.296868745Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5dc509da-8e63-4db0-ba8e-08921ad0995b name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:30:37 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:30:37.297051122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5dc509da-8e63-4db0-ba8e-08921ad0995b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	663378c6a7e6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   85449f9cd07fb       storage-provisioner
	28ae448dbfc1d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   4ca8689dbd875       busybox
	9d8b76dc25828       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   e25bc5b697075       coredns-5dd5756b68-w9fx2
	4ed399796d792       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   85449f9cd07fb       storage-provisioner
	81911669b0855       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago      Running             kube-proxy                1                   109211d3e5b05       kube-proxy-btmmm
	472e7252cc27d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   ce63475bfdf79       etcd-default-k8s-diff-port-313436
	1d277e87ef306       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      13 minutes ago      Running             kube-controller-manager   1                   0d85c2214b0b6       kube-controller-manager-default-k8s-diff-port-313436
	06a79188858d0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago      Running             kube-scheduler            1                   355142258a647       kube-scheduler-default-k8s-diff-port-313436
	1ea844db70263       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      13 minutes ago      Running             kube-apiserver            1                   88bf54752601c       kube-apiserver-default-k8s-diff-port-313436
	
	
	==> coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55130 - 21089 "HINFO IN 5248382490005511924.3970797499207171790. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020344801s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-313436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-313436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=default-k8s-diff-port-313436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_09_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-313436
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:30:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:27:48 +0000   Sat, 16 Mar 2024 00:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:27:48 +0000   Sat, 16 Mar 2024 00:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:27:48 +0000   Sat, 16 Mar 2024 00:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:27:48 +0000   Sat, 16 Mar 2024 00:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.198
	  Hostname:    default-k8s-diff-port-313436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 946b5a3986d64627993d563dfdbf7c19
	  System UUID:                946b5a39-86d6-4627-993d-563dfdbf7c19
	  Boot ID:                    14dacfab-6c8c-4adf-8510-4946d093b8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-w9fx2                                 100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-313436                        100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-313436              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-313436     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-btmmm                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-313436              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-cm878                          100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-313436 event: Registered Node default-k8s-diff-port-313436 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-313436 event: Registered Node default-k8s-diff-port-313436 in Controller
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053448] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040276] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.665411] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.585502] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.646602] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.108325] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.061948] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067209] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.219262] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.160171] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.269550] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +5.232409] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +0.076050] kauditd_printk_skb: 130 callbacks suppressed
	[Mar16 00:17] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +5.607762] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.459473] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +3.256275] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.798809] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] <==
	{"level":"info","ts":"2024-03-16T00:17:04.859333Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:17:04.860786Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:17:04.860852Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:17:04.861819Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:17:21.60248Z","caller":"traceutil/trace.go:171","msg":"trace[1083435639] transaction","detail":"{read_only:false; response_revision:590; number_of_response:1; }","duration":"694.253928ms","start":"2024-03-16T00:17:20.908174Z","end":"2024-03-16T00:17:21.602428Z","steps":["trace[1083435639] 'process raft request'  (duration: 694.065261ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:21.603491Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:20.908159Z","time spent":"694.607882ms","remote":"127.0.0.1:55158","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5418,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" mod_revision:488 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" value_size:5350 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" > >"}
	{"level":"warn","ts":"2024-03-16T00:17:21.848161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.007971ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14848242921832230001 > lease_revoke:<id:4e0f8e449784fde7>","response":"size:28"}
	{"level":"info","ts":"2024-03-16T00:17:21.848514Z","caller":"traceutil/trace.go:171","msg":"trace[855690415] linearizableReadLoop","detail":"{readStateIndex:622; appliedIndex:621; }","duration":"452.287629ms","start":"2024-03-16T00:17:21.396211Z","end":"2024-03-16T00:17:21.848499Z","steps":["trace[855690415] 'read index received'  (duration: 206.74707ms)","trace[855690415] 'applied index is now lower than readState.Index'  (duration: 245.538999ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:17:21.848894Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"452.625364ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" ","response":"range_response_count:1 size:5433"}
	{"level":"info","ts":"2024-03-16T00:17:21.849036Z","caller":"traceutil/trace.go:171","msg":"trace[1197727549] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-313436; range_end:; response_count:1; response_revision:590; }","duration":"452.845094ms","start":"2024-03-16T00:17:21.396176Z","end":"2024-03-16T00:17:21.849021Z","steps":["trace[1197727549] 'agreement among raft nodes before linearized reading'  (duration: 452.59302ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:21.849264Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:21.396136Z","time spent":"453.001348ms","remote":"127.0.0.1:55158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5456,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" "}
	{"level":"warn","ts":"2024-03-16T00:17:21.849404Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"407.506339ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-16T00:17:21.849471Z","caller":"traceutil/trace.go:171","msg":"trace[701525151] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:590; }","duration":"407.576387ms","start":"2024-03-16T00:17:21.441882Z","end":"2024-03-16T00:17:21.849458Z","steps":["trace[701525151] 'agreement among raft nodes before linearized reading'  (duration: 407.397718ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:21.849559Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"240.75245ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" ","response":"range_response_count:1 size:5433"}
	{"level":"info","ts":"2024-03-16T00:17:21.849681Z","caller":"traceutil/trace.go:171","msg":"trace[278024864] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-313436; range_end:; response_count:1; response_revision:590; }","duration":"240.882411ms","start":"2024-03-16T00:17:21.608789Z","end":"2024-03-16T00:17:21.849672Z","steps":["trace[278024864] 'agreement among raft nodes before linearized reading'  (duration: 240.726806ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:21.849733Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:21.441865Z","time spent":"407.779557ms","remote":"127.0.0.1:54940","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"info","ts":"2024-03-16T00:17:22.229742Z","caller":"traceutil/trace.go:171","msg":"trace[1562147951] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"333.529655ms","start":"2024-03-16T00:17:21.896193Z","end":"2024-03-16T00:17:22.229722Z","steps":["trace[1562147951] 'read index received'  (duration: 332.202486ms)","trace[1562147951] 'applied index is now lower than readState.Index'  (duration: 1.326409ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-16T00:17:22.229901Z","caller":"traceutil/trace.go:171","msg":"trace[251114440] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"369.524893ms","start":"2024-03-16T00:17:21.860364Z","end":"2024-03-16T00:17:22.229889Z","steps":["trace[251114440] 'process raft request'  (duration: 368.084543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:22.229924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.727223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" ","response":"range_response_count:1 size:5261"}
	{"level":"info","ts":"2024-03-16T00:17:22.230099Z","caller":"traceutil/trace.go:171","msg":"trace[2105319215] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-313436; range_end:; response_count:1; response_revision:591; }","duration":"333.915651ms","start":"2024-03-16T00:17:21.89617Z","end":"2024-03-16T00:17:22.230086Z","steps":["trace[2105319215] 'agreement among raft nodes before linearized reading'  (duration: 333.698748ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:22.230188Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:21.896156Z","time spent":"334.01955ms","remote":"127.0.0.1:55158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5284,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" "}
	{"level":"warn","ts":"2024-03-16T00:17:22.230008Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:21.860348Z","time spent":"369.614921ms","remote":"127.0.0.1:55158","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5246,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" mod_revision:590 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" value_size:5178 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" > >"}
	{"level":"info","ts":"2024-03-16T00:27:04.896074Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":844}
	{"level":"info","ts":"2024-03-16T00:27:04.89829Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":844,"took":"1.560917ms","hash":3091162621}
	{"level":"info","ts":"2024-03-16T00:27:04.898364Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3091162621,"revision":844,"compact-revision":-1}
	
	
	==> kernel <==
	 00:30:37 up 13 min,  0 users,  load average: 0.20, 0.13, 0.09
	Linux default-k8s-diff-port-313436 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] <==
	I0316 00:27:06.351307       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:27:07.351804       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:07.351945       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:27:07.351989       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:27:07.351905       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:07.352105       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:27:07.353288       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:28:06.296667       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:28:07.352330       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:28:07.352431       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:28:07.352458       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:28:07.353532       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:28:07.353745       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:28:07.353776       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:29:06.296171       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0316 00:30:06.296160       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:30:07.353301       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:30:07.353447       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:30:07.353473       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:30:07.354465       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:30:07.354659       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:30:07.354692       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] <==
	I0316 00:24:49.415024       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:25:18.919757       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:25:19.424749       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:25:48.925564       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:25:49.434667       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:26:18.931542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:26:19.446534       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:26:48.937425       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:26:49.454429       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:27:18.944203       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:27:19.463062       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:27:48.949810       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:27:49.472262       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:28:08.991294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="749.617µs"
	E0316 00:28:18.956837       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:28:19.480377       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:28:23.987049       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="108.351µs"
	E0316 00:28:48.961877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:28:49.489103       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:29:18.967403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:29:19.497404       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:29:48.972730       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:29:49.505027       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:30:18.981304       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:30:19.513359       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] <==
	I0316 00:17:07.649911       1 server_others.go:69] "Using iptables proxy"
	I0316 00:17:07.660306       1 node.go:141] Successfully retrieved node IP: 192.168.72.198
	I0316 00:17:07.700454       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:17:07.700494       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:17:07.703213       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:17:07.703275       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:17:07.703468       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:17:07.703496       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:17:07.704339       1 config.go:188] "Starting service config controller"
	I0316 00:17:07.704391       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:17:07.704412       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:17:07.704416       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:17:07.704923       1 config.go:315] "Starting node config controller"
	I0316 00:17:07.704955       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:17:07.805545       1 shared_informer.go:318] Caches are synced for node config
	I0316 00:17:07.805647       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:17:07.805742       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] <==
	I0316 00:17:03.986934       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:17:06.381764       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:17:06.383674       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:17:06.383791       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:17:06.383820       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:17:06.399284       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:17:06.399390       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:17:06.401127       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:17:06.401220       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:17:06.405807       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:17:06.401232       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:17:06.508522       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:28:02 default-k8s-diff-port-313436 kubelet[905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:28:02 default-k8s-diff-port-313436 kubelet[905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:28:02 default-k8s-diff-port-313436 kubelet[905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:28:02 default-k8s-diff-port-313436 kubelet[905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:28:08 default-k8s-diff-port-313436 kubelet[905]: E0316 00:28:08.971255     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:28:23 default-k8s-diff-port-313436 kubelet[905]: E0316 00:28:23.972133     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:28:37 default-k8s-diff-port-313436 kubelet[905]: E0316 00:28:37.970360     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:28:52 default-k8s-diff-port-313436 kubelet[905]: E0316 00:28:52.969476     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:29:02 default-k8s-diff-port-313436 kubelet[905]: E0316 00:29:02.002328     905 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:29:02 default-k8s-diff-port-313436 kubelet[905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:29:02 default-k8s-diff-port-313436 kubelet[905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:29:02 default-k8s-diff-port-313436 kubelet[905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:29:02 default-k8s-diff-port-313436 kubelet[905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:29:03 default-k8s-diff-port-313436 kubelet[905]: E0316 00:29:03.970103     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:29:17 default-k8s-diff-port-313436 kubelet[905]: E0316 00:29:17.970679     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:29:31 default-k8s-diff-port-313436 kubelet[905]: E0316 00:29:31.970267     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:29:45 default-k8s-diff-port-313436 kubelet[905]: E0316 00:29:45.970850     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:29:59 default-k8s-diff-port-313436 kubelet[905]: E0316 00:29:59.970876     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:30:02 default-k8s-diff-port-313436 kubelet[905]: E0316 00:30:02.000886     905 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:30:02 default-k8s-diff-port-313436 kubelet[905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:30:02 default-k8s-diff-port-313436 kubelet[905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:30:02 default-k8s-diff-port-313436 kubelet[905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:30:02 default-k8s-diff-port-313436 kubelet[905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:30:14 default-k8s-diff-port-313436 kubelet[905]: E0316 00:30:14.970625     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:30:25 default-k8s-diff-port-313436 kubelet[905]: E0316 00:30:25.969995     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	
	
	==> storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] <==
	I0316 00:17:07.530247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0316 00:17:37.533960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] <==
	I0316 00:17:38.318292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:17:38.326976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:17:38.327078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:17:38.337969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:17:38.338425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313436_765ba5e8-5a3e-47ea-bb2a-0184565770b1!
	I0316 00:17:38.340918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d337c78-eae8-4f4c-898f-77886111425a", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-313436_765ba5e8-5a3e-47ea-bb2a-0184565770b1 became leader
	I0316 00:17:38.439665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313436_765ba5e8-5a3e-47ea-bb2a-0184565770b1!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-cm878
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 describe pod metrics-server-57f55c9bc5-cm878
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-313436 describe pod metrics-server-57f55c9bc5-cm878: exit status 1 (66.303445ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cm878" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-313436 describe pod metrics-server-57f55c9bc5-cm878: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.31s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.39s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0316 00:23:58.905900   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0316 00:24:08.402108   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-238598 -n no-preload-238598
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-16 00:32:16.077551771 +0000 UTC m=+5766.500403036
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-238598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-238598 logs -n 25: (2.143068704s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-313368 ssh                                | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:13:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:00.891560  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:13:06.971548  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:10.043616  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:16.123615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:19.195641  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:25.275569  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:28.347627  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:34.427628  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:37.499621  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:43.579636  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:46.651611  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:52.731602  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:55.803555  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:01.883545  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:04.955579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:11.035610  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:14.107615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:20.187606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:23.259572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:29.339575  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:32.411617  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:38.491587  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:41.563659  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:47.643582  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:50.715565  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:56.795596  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:59.867614  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:05.947572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:09.019585  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:15.099606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:18.171563  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:24.251589  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:27.323592  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:33.403599  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:36.475652  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:42.555600  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:45.627577  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:51.707630  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:54.779625  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:00.859579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:03.931626  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:10.011762  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:13.083615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:16.087122  123537 start.go:364] duration metric: took 4m28.254030119s to acquireMachinesLock for "embed-certs-666637"
	I0316 00:16:16.087211  123537 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:16.087224  123537 fix.go:54] fixHost starting: 
	I0316 00:16:16.087613  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:16.087653  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:16.102371  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0316 00:16:16.102813  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:16.103305  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:16.103343  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:16.103693  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:16.103874  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:16.104010  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:16.105752  123537 fix.go:112] recreateIfNeeded on embed-certs-666637: state=Stopped err=<nil>
	I0316 00:16:16.105780  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	W0316 00:16:16.105959  123537 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:16.107881  123537 out.go:177] * Restarting existing kvm2 VM for "embed-certs-666637" ...
	I0316 00:16:16.109056  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Start
	I0316 00:16:16.109231  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring networks are active...
	I0316 00:16:16.110036  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network default is active
	I0316 00:16:16.110372  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network mk-embed-certs-666637 is active
	I0316 00:16:16.110782  123537 main.go:141] libmachine: (embed-certs-666637) Getting domain xml...
	I0316 00:16:16.111608  123537 main.go:141] libmachine: (embed-certs-666637) Creating domain...
	I0316 00:16:17.296901  123537 main.go:141] libmachine: (embed-certs-666637) Waiting to get IP...
	I0316 00:16:17.297746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.298129  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.298317  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.298111  124543 retry.go:31] will retry after 269.98852ms: waiting for machine to come up
	I0316 00:16:17.569866  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.570322  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.570349  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.570278  124543 retry.go:31] will retry after 244.711835ms: waiting for machine to come up
	I0316 00:16:16.084301  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:16.084359  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084699  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:16:16.084726  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084970  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:16:16.086868  123454 machine.go:97] duration metric: took 4m35.39093995s to provisionDockerMachine
	I0316 00:16:16.087007  123454 fix.go:56] duration metric: took 4m35.413006758s for fixHost
	I0316 00:16:16.087038  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 4m35.413320023s
	W0316 00:16:16.087068  123454 start.go:713] error starting host: provision: host is not running
	W0316 00:16:16.087236  123454 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0316 00:16:16.087249  123454 start.go:728] Will try again in 5 seconds ...
	I0316 00:16:17.816747  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.817165  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.817196  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.817109  124543 retry.go:31] will retry after 326.155242ms: waiting for machine to come up
	I0316 00:16:18.144611  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.145047  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.145081  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.145000  124543 retry.go:31] will retry after 464.805158ms: waiting for machine to come up
	I0316 00:16:18.611746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.612105  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.612140  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.612039  124543 retry.go:31] will retry after 593.718495ms: waiting for machine to come up
	I0316 00:16:19.208024  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.208444  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.208476  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.208379  124543 retry.go:31] will retry after 772.07702ms: waiting for machine to come up
	I0316 00:16:19.982326  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.982800  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.982827  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.982706  124543 retry.go:31] will retry after 846.887476ms: waiting for machine to come up
	I0316 00:16:20.830726  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:20.831144  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:20.831168  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:20.831098  124543 retry.go:31] will retry after 1.274824907s: waiting for machine to come up
	I0316 00:16:22.107855  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:22.108252  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:22.108278  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:22.108209  124543 retry.go:31] will retry after 1.41217789s: waiting for machine to come up
	I0316 00:16:21.088013  123454 start.go:360] acquireMachinesLock for no-preload-238598: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:23.522725  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:23.523143  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:23.523179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:23.523094  124543 retry.go:31] will retry after 1.567285216s: waiting for machine to come up
	I0316 00:16:25.092539  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:25.092954  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:25.092981  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:25.092941  124543 retry.go:31] will retry after 2.260428679s: waiting for machine to come up
	I0316 00:16:27.354650  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:27.355051  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:27.355082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:27.354990  124543 retry.go:31] will retry after 2.402464465s: waiting for machine to come up
	I0316 00:16:29.758774  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:29.759220  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:29.759253  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:29.759176  124543 retry.go:31] will retry after 3.63505234s: waiting for machine to come up
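The run of "unable to find current IP address ... will retry after ..." lines above is the driver polling libvirt's DHCP leases until the guest reports an address, stretching the pause between polls each time. A minimal, self-contained Go sketch of that poll-with-growing-backoff pattern (the helper name, growth factor and timeout are illustrative assumptions, not minikube's actual retry.go API):

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the deadline expires,
    // increasing the delay between attempts much like the log lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 500 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            delay = delay * 3 / 2 // grow the wait a little on every miss
        }
        return "", errors.New("timed out waiting for the machine to get an IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 { // pretend the DHCP lease shows up on the fourth poll
                return "", errors.New("no lease yet")
            }
            return "192.168.61.91", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }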
	I0316 00:16:34.648552  123819 start.go:364] duration metric: took 4m4.062008179s to acquireMachinesLock for "default-k8s-diff-port-313436"
	I0316 00:16:34.648628  123819 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:34.648638  123819 fix.go:54] fixHost starting: 
	I0316 00:16:34.649089  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:34.649134  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:34.667801  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0316 00:16:34.668234  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:34.668737  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:16:34.668768  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:34.669123  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:34.669349  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:34.669552  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:16:34.671100  123819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313436: state=Stopped err=<nil>
	I0316 00:16:34.671139  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	W0316 00:16:34.671297  123819 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:34.673738  123819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-313436" ...
	I0316 00:16:34.675120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Start
	I0316 00:16:34.675292  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring networks are active...
	I0316 00:16:34.676038  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network default is active
	I0316 00:16:34.676427  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network mk-default-k8s-diff-port-313436 is active
	I0316 00:16:34.676855  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Getting domain xml...
	I0316 00:16:34.677501  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Creating domain...
	I0316 00:16:33.397686  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398274  123537 main.go:141] libmachine: (embed-certs-666637) Found IP for machine: 192.168.61.91
	I0316 00:16:33.398301  123537 main.go:141] libmachine: (embed-certs-666637) Reserving static IP address...
	I0316 00:16:33.398319  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has current primary IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398829  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.398859  123537 main.go:141] libmachine: (embed-certs-666637) DBG | skip adding static IP to network mk-embed-certs-666637 - found existing host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"}
	I0316 00:16:33.398883  123537 main.go:141] libmachine: (embed-certs-666637) Reserved static IP address: 192.168.61.91
	I0316 00:16:33.398896  123537 main.go:141] libmachine: (embed-certs-666637) Waiting for SSH to be available...
	I0316 00:16:33.398905  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Getting to WaitForSSH function...
	I0316 00:16:33.401376  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.401835  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.401872  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.402054  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH client type: external
	I0316 00:16:33.402082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa (-rw-------)
	I0316 00:16:33.402113  123537 main.go:141] libmachine: (embed-certs-666637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:33.402141  123537 main.go:141] libmachine: (embed-certs-666637) DBG | About to run SSH command:
	I0316 00:16:33.402188  123537 main.go:141] libmachine: (embed-certs-666637) DBG | exit 0
	I0316 00:16:33.523353  123537 main.go:141] libmachine: (embed-certs-666637) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:33.523747  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetConfigRaw
	I0316 00:16:33.524393  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.526639  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527046  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.527080  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527278  123537 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:16:33.527509  123537 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:33.527527  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:33.527766  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.529906  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.530210  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530341  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.530596  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530816  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530953  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.531119  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.531334  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.531348  123537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:33.635573  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:33.635601  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.635879  123537 buildroot.go:166] provisioning hostname "embed-certs-666637"
	I0316 00:16:33.635905  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.636109  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.638998  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639369  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.639417  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639629  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.639795  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.639971  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.640103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.640366  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.640524  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.640543  123537 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-666637 && echo "embed-certs-666637" | sudo tee /etc/hostname
	I0316 00:16:33.757019  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-666637
	
	I0316 00:16:33.757049  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.759808  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760120  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.760154  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760375  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.760583  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760723  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760829  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.760951  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.761121  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.761144  123537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-666637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-666637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-666637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:33.873548  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:33.873587  123537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:33.873642  123537 buildroot.go:174] setting up certificates
	I0316 00:16:33.873654  123537 provision.go:84] configureAuth start
	I0316 00:16:33.873666  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.873986  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.876609  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.876976  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.877004  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.877194  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.879624  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880156  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.880185  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880300  123537 provision.go:143] copyHostCerts
	I0316 00:16:33.880359  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:33.880370  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:33.880441  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:33.880526  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:33.880534  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:33.880558  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:33.880625  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:33.880632  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:33.880653  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:33.880707  123537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.embed-certs-666637 san=[127.0.0.1 192.168.61.91 embed-certs-666637 localhost minikube]
	I0316 00:16:33.984403  123537 provision.go:177] copyRemoteCerts
	I0316 00:16:33.984471  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:33.984499  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.987297  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987711  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.987741  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987894  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.988108  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.988284  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.988456  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.069540  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:34.094494  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 00:16:34.119198  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:34.144669  123537 provision.go:87] duration metric: took 271.000471ms to configureAuth
	I0316 00:16:34.144701  123537 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:34.144891  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:34.144989  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.148055  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148464  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.148496  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148710  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.148918  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149097  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149251  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.149416  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.149580  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.149596  123537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:34.414026  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:34.414058  123537 machine.go:97] duration metric: took 886.536134ms to provisionDockerMachine
	I0316 00:16:34.414070  123537 start.go:293] postStartSetup for "embed-certs-666637" (driver="kvm2")
	I0316 00:16:34.414081  123537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:34.414101  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.414464  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:34.414497  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.417211  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417482  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.417520  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417617  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.417804  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.417990  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.418126  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.498223  123537 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:34.502954  123537 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:34.502989  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:34.503068  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:34.503156  123537 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:34.503258  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:34.513065  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:34.537606  123537 start.go:296] duration metric: took 123.521431ms for postStartSetup
	I0316 00:16:34.537657  123537 fix.go:56] duration metric: took 18.450434099s for fixHost
	I0316 00:16:34.537679  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.540574  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.540908  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.540950  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.541086  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.541302  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541471  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541609  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.541803  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.542009  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.542025  123537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:34.648381  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548194.613058580
	
	I0316 00:16:34.648419  123537 fix.go:216] guest clock: 1710548194.613058580
	I0316 00:16:34.648427  123537 fix.go:229] Guest: 2024-03-16 00:16:34.61305858 +0000 UTC Remote: 2024-03-16 00:16:34.537661993 +0000 UTC m=+286.854063579 (delta=75.396587ms)
	I0316 00:16:34.648454  123537 fix.go:200] guest clock delta is within tolerance: 75.396587ms
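The guest-clock lines above run date +%s.%N inside the VM and compare the result with the host's wall clock, accepting the existing clock when the difference stays inside a tolerance (about 75ms here). A small, self-contained Go sketch of that comparison (the one-second tolerance is an assumed value for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaOK reports whether guest and host time differ by no more than tol,
    // mirroring the "guest clock delta is within tolerance" check in the log.
    func clockDeltaOK(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        host := time.Now()
        guest := host.Add(75 * time.Millisecond) // roughly the delta seen above
        delta, ok := clockDeltaOK(guest, host, time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }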
	I0316 00:16:34.648459  123537 start.go:83] releasing machines lock for "embed-certs-666637", held for 18.561300744s
	I0316 00:16:34.648483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.648770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:34.651350  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651748  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.651794  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651926  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652573  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652810  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652907  123537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:34.652965  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.653064  123537 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:34.653090  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.655796  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656121  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656149  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656170  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656281  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656461  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.656562  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656586  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656640  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.656739  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656807  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.656883  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.657023  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.657249  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.759596  123537 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:34.765571  123537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:34.915897  123537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:34.923372  123537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:34.923471  123537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:34.940579  123537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:34.940613  123537 start.go:494] detecting cgroup driver to use...
	I0316 00:16:34.940699  123537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:34.957640  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:34.971525  123537 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:34.971598  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:34.987985  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:35.001952  123537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:35.124357  123537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:35.273948  123537 docker.go:233] disabling docker service ...
	I0316 00:16:35.274037  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:35.291073  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:35.311209  123537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:35.460630  123537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:35.581263  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:35.596460  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:35.617992  123537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:35.618042  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.628372  123537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:35.628426  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.639487  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.650397  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.662065  123537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:35.676003  123537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:35.686159  123537 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:35.686241  123537 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:35.699814  123537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
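The netfilter lines above show a common fallback: sysctl net.bridge.bridge-nf-call-iptables fails because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding is enabled before the runtime restart. A hedged Go sketch of that check-then-load sequence using only os/exec; the commands are the ones in the log, the wrapper itself is illustrative and would need root inside the guest:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // run executes a command and folds its combined output into the error message.
    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
        }
        return nil
    }

    func main() {
        // Verifying the bridge netfilter sysctl fails if br_netfilter is not loaded...
        if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
            fmt.Println("couldn't verify netfilter, loading br_netfilter:", err)
            // ...so load the module, as the log does with modprobe br_netfilter.
            if err := run("modprobe", "br_netfilter"); err != nil {
                fmt.Println("modprobe failed:", err)
            }
        }
        // Enable IPv4 forwarding, as the log does with `echo 1 > /proc/sys/net/ipv4/ip_forward`.
        if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
            fmt.Println("enabling ip_forward failed:", err)
        }
    }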
	I0316 00:16:35.710182  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:35.831831  123537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:35.977556  123537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:35.977638  123537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:35.982729  123537 start.go:562] Will wait 60s for crictl version
	I0316 00:16:35.982806  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:16:35.986695  123537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:36.023299  123537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:36.023412  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.055441  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.090313  123537 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:36.091622  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:36.094687  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095062  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:36.095098  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095277  123537 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:36.099781  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:36.113522  123537 kubeadm.go:877] updating cluster {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:36.113674  123537 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:36.113743  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:36.152208  123537 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:36.152300  123537 ssh_runner.go:195] Run: which lz4
	I0316 00:16:36.156802  123537 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:36.161430  123537 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:36.161472  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:35.911510  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting to get IP...
	I0316 00:16:35.912562  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.912986  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.913064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:35.912955  124655 retry.go:31] will retry after 248.147893ms: waiting for machine to come up
	I0316 00:16:36.162476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163094  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163127  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.163032  124655 retry.go:31] will retry after 387.219214ms: waiting for machine to come up
	I0316 00:16:36.551678  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552203  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552236  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.552178  124655 retry.go:31] will retry after 391.385671ms: waiting for machine to come up
	I0316 00:16:36.945741  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946275  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.946216  124655 retry.go:31] will retry after 470.449619ms: waiting for machine to come up
	I0316 00:16:37.417836  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418324  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418353  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.418259  124655 retry.go:31] will retry after 508.962644ms: waiting for machine to come up
	I0316 00:16:37.929194  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929710  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.929671  124655 retry.go:31] will retry after 877.538639ms: waiting for machine to come up
	I0316 00:16:38.808551  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809061  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809100  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:38.809002  124655 retry.go:31] will retry after 754.319242ms: waiting for machine to come up
	I0316 00:16:39.565060  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565475  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565512  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:39.565411  124655 retry.go:31] will retry after 1.472475348s: waiting for machine to come up
	I0316 00:16:37.946470  123537 crio.go:444] duration metric: took 1.789700065s to copy over tarball
	I0316 00:16:37.946552  123537 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:40.497841  123537 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551257887s)
	I0316 00:16:40.497867  123537 crio.go:451] duration metric: took 2.551367803s to extract the tarball
	I0316 00:16:40.497875  123537 ssh_runner.go:146] rm: /preloaded.tar.lz4
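The preload sequence above amounts to: stat /preloaded.tar.lz4 (absent on this restart), scp the cached preloaded-images tarball over, unpack it into /var with tar --xattrs -I lz4, then delete the tarball. A compact Go sketch of the same extract-then-clean-up step (the paths and the local os/exec execution are assumptions; the real flow runs these commands over SSH):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // extractPreload unpacks tarball into dest with lz4 decompression, keeping
    // extended attributes, and removes the tarball afterwards - the same three
    // steps the log performs on the guest.
    func extractPreload(tarball, dest string) error {
        if _, err := os.Stat(tarball); err != nil {
            return fmt.Errorf("preload tarball not present: %w", err)
        }
        cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", dest, "-xf", tarball)
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("extract failed: %v: %s", err, out)
        }
        return os.Remove(tarball)
    }

    func main() {
        if err := extractPreload("/preloaded.tar.lz4", "/var"); err != nil {
            fmt.Println(err)
        }
    }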
	I0316 00:16:40.539695  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:40.588945  123537 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:40.588974  123537 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:40.588983  123537 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.28.4 crio true true} ...
	I0316 00:16:40.589125  123537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-666637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:40.589216  123537 ssh_runner.go:195] Run: crio config
	I0316 00:16:40.641673  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:40.641702  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:40.641719  123537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:40.641754  123537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-666637 NodeName:embed-certs-666637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:40.641939  123537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-666637"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:40.642024  123537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:40.652461  123537 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:40.652539  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:40.662114  123537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 00:16:40.679782  123537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:40.701982  123537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0316 00:16:40.720088  123537 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:40.724199  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
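The bash one-liner above updates /etc/hosts idempotently: drop any existing control-plane.minikube.internal line, append the current mapping, and copy the temporary file back into place. The same idea as a small, self-contained Go sketch (the function name and 0644 mode are illustrative, not minikube's code):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry removes any line ending in "\t<host>" from path and appends
    // "ip\thost", mirroring the grep -v / echo / cp pipeline in the log. Blank
    // lines are dropped along the way.
    func ensureHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if line == "" || strings.HasSuffix(line, "\t"+host) {
                continue // skip the stale mapping and empty lines
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.61.91", "control-plane.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }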
	I0316 00:16:40.737133  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:40.860343  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:40.878437  123537 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637 for IP: 192.168.61.91
	I0316 00:16:40.878466  123537 certs.go:194] generating shared ca certs ...
	I0316 00:16:40.878489  123537 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:40.878690  123537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:40.878766  123537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:40.878779  123537 certs.go:256] generating profile certs ...
	I0316 00:16:40.878888  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/client.key
	I0316 00:16:40.878990  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key.07955952
	I0316 00:16:40.879059  123537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key
	I0316 00:16:40.879178  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:40.879225  123537 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:40.879239  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:40.879271  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:40.879302  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:40.879352  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:40.879409  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:40.880141  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:40.924047  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:40.962441  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:41.000283  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:41.034353  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 00:16:41.069315  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:16:41.100325  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:16:41.129285  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:16:41.155899  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:16:41.180657  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:16:41.205961  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:16:41.231886  123537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:16:41.249785  123537 ssh_runner.go:195] Run: openssl version
	I0316 00:16:41.255703  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:16:41.266968  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271536  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271595  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.277460  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:16:41.288854  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:16:41.300302  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305189  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305256  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.311200  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:16:41.322784  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:16:41.334879  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339774  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339837  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.345746  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
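The three install blocks above follow the standard OpenSSL hashed-symlink layout: each CA file is copied under /usr/share/ca-certificates and a symlink named <subject-hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 in the log) is created in /etc/ssl/certs so OpenSSL's default verify path can find it. A minimal sketch of that convention using the minikubeCA file from the log:

	# compute the subject-name hash OpenSSL looks up at verify time (b5213941 here)
	subject_hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# expose the CA under <hash>.0 inside the default certificate directory
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${subject_hash}.0"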
	I0316 00:16:41.357661  123537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:16:41.362469  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:16:41.368875  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:16:41.375759  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:16:41.382518  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:16:41.388629  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:16:41.394882  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
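Each -checkend 86400 probe above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit status would trigger regeneration of that certificate. The same check by hand, using one of the paths from the log:

	# exit 0: valid for at least another 24h; exit 1: expiring soon
	if ! sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	    echo "etcd server cert expires within 24h"
	fi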
	I0316 00:16:41.401114  123537 kubeadm.go:391] StartCluster: {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:16:41.401243  123537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:16:41.401304  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.449499  123537 cri.go:89] found id: ""
	I0316 00:16:41.449590  123537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:16:41.461139  123537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:16:41.461165  123537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:16:41.461173  123537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:16:41.461243  123537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:16:41.473648  123537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:16:41.474652  123537 kubeconfig.go:125] found "embed-certs-666637" server: "https://192.168.61.91:8443"
	I0316 00:16:41.476724  123537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:16:41.488387  123537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0316 00:16:41.488426  123537 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:16:41.488439  123537 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:16:41.488485  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.526197  123537 cri.go:89] found id: ""
	I0316 00:16:41.526283  123537 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:16:41.545489  123537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:16:41.555977  123537 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:16:41.555998  123537 kubeadm.go:156] found existing configuration files:
	
	I0316 00:16:41.556048  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:16:41.565806  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:16:41.565891  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:16:41.575646  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:16:41.585269  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:16:41.585329  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:16:41.595336  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.605081  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:16:41.605144  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.615182  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:16:41.624781  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:16:41.624837  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:16:41.634852  123537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:16:41.644749  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.748782  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.477775  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.688730  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.039441  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039924  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:41.039885  124655 retry.go:31] will retry after 1.408692905s: waiting for machine to come up
	I0316 00:16:42.449971  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450402  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:42.450355  124655 retry.go:31] will retry after 1.539639877s: waiting for machine to come up
	I0316 00:16:43.992314  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992833  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992869  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:43.992777  124655 retry.go:31] will retry after 2.297369864s: waiting for machine to come up
	I0316 00:16:42.777223  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
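Rather than a full kubeadm init, the restart path replays individual phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml, which is exactly the sequence of Run lines above. Condensed into a loop for readability, using the same binary path and config file as the log shows; this is a sketch, not the minikube source:

	# phases run in order; $phase is left unquoted so "certs all" splits into two arguments
	for phase in "certs all" "kubeconfig all" "kubelet-start" "control-plane all" "etcd local"; do
	    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
	        kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
	done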
	I0316 00:16:42.944089  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:16:42.944193  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.445082  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.945117  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.963812  123537 api_server.go:72] duration metric: took 1.019723734s to wait for apiserver process to appear ...
	I0316 00:16:43.963845  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:16:43.963871  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.924208  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.924258  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.924278  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.953212  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.953245  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.964449  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.988201  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.988232  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:47.464502  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.469385  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.469421  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:47.964483  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.970448  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.970492  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:48.463984  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:48.468908  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:16:48.476120  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:16:48.476153  123537 api_server.go:131] duration metric: took 4.512298176s to wait for apiserver health ...
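The 403 and 500 responses above are normal for a freshly restarted apiserver: anonymous access to /healthz is only allowed once the rbac/bootstrap-roles post-start hook has finished, and the 500 bodies list exactly which hooks are still pending. minikube simply polls the endpoint until it returns 200 "ok". A rough curl equivalent of that poll (illustrative only; the real check runs inside the Go test driver):

	# -k because the minikube CA is not in the host trust store; loop until the body is exactly "ok"
	until curl -ks https://192.168.61.91:8443/healthz | grep -qx ok; do
	    sleep 0.5
	done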
	I0316 00:16:48.476164  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:48.476172  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:48.478076  123537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:16:48.479565  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:16:48.490129  123537 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
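The 457-byte file written to /etc/cni/net.d/1-k8s.conflist is the bridge CNI configuration the "Configuring bridge CNI" message above refers to. Its exact contents are not in the log; the sketch below is a generic bridge-plus-portmap conflist of the same shape, not the literal file minikube generates:

	# <<- strips the leading tabs so the heredoc terminator still matches
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<-'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {"type": "bridge", "bridge": "bridge", "isGateway": true, "ipMasq": true,
	     "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}},
	    {"type": "portmap", "capabilities": {"portMappings": true}}
	  ]
	}
	EOF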
	I0316 00:16:48.516263  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:16:48.532732  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:16:48.532768  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:16:48.532778  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:16:48.532788  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:16:48.532795  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:16:48.532801  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:16:48.532808  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:16:48.532815  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:16:48.532822  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:16:48.532833  123537 system_pods.go:74] duration metric: took 16.547677ms to wait for pod list to return data ...
	I0316 00:16:48.532845  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:16:48.535945  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:16:48.535989  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:16:48.536006  123537 node_conditions.go:105] duration metric: took 3.154184ms to run NodePressure ...
	I0316 00:16:48.536027  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:48.733537  123537 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739166  123537 kubeadm.go:733] kubelet initialised
	I0316 00:16:48.739196  123537 kubeadm.go:734] duration metric: took 5.63118ms waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739209  123537 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:48.744724  123537 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.750261  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750299  123537 pod_ready.go:81] duration metric: took 5.547917ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.750310  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750323  123537 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.755340  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755362  123537 pod_ready.go:81] duration metric: took 5.029639ms for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.755371  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755379  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.761104  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761128  123537 pod_ready.go:81] duration metric: took 5.740133ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.761138  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761146  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.921215  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921244  123537 pod_ready.go:81] duration metric: took 160.08501ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.921254  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921260  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.319922  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319954  123537 pod_ready.go:81] duration metric: took 398.685799ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.319963  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319969  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.720866  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720922  123537 pod_ready.go:81] duration metric: took 400.944023ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.720948  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720967  123537 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:50.120836  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120865  123537 pod_ready.go:81] duration metric: took 399.883676ms for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:50.120875  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120882  123537 pod_ready.go:38] duration metric: took 1.381661602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:50.120923  123537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:16:50.133619  123537 ops.go:34] apiserver oom_adj: -16
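Reading /proc/<pid>/oom_adj confirms the apiserver is protected from the kernel OOM killer; -16 on the legacy -17..15 scale corresponds to a strongly negative oom_score_adj, i.e. one of the last processes the kernel would kill. The same check by hand (the -xn pgrep flags, picking the newest exact match, are an assumption, not copied from the log):

	# legacy scale (-17..15); mirrors the oom_score_adj applied to the static pod
	cat /proc/$(pgrep -xn kube-apiserver)/oom_adj
	# modern scale (-1000..1000), kept consistent with oom_adj by the kernel
	cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj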
	I0316 00:16:50.133653  123537 kubeadm.go:591] duration metric: took 8.672472438s to restartPrimaryControlPlane
	I0316 00:16:50.133663  123537 kubeadm.go:393] duration metric: took 8.732557685s to StartCluster
	I0316 00:16:50.133684  123537 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.133760  123537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:16:50.135355  123537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.135613  123537 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:16:50.140637  123537 out.go:177] * Verifying Kubernetes components...
	I0316 00:16:50.135727  123537 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:16:50.135843  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:50.142015  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:50.142027  123537 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-666637"
	I0316 00:16:50.142050  123537 addons.go:69] Setting default-storageclass=true in profile "embed-certs-666637"
	I0316 00:16:50.142070  123537 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-666637"
	W0316 00:16:50.142079  123537 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:16:50.142090  123537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-666637"
	I0316 00:16:50.142092  123537 addons.go:69] Setting metrics-server=true in profile "embed-certs-666637"
	I0316 00:16:50.142121  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142124  123537 addons.go:234] Setting addon metrics-server=true in "embed-certs-666637"
	W0316 00:16:50.142136  123537 addons.go:243] addon metrics-server should already be in state true
	I0316 00:16:50.142168  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142439  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142468  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142558  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142577  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.156773  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0316 00:16:50.156804  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0316 00:16:50.157267  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157268  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157591  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0316 00:16:50.157835  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157841  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157857  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157858  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157925  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.158223  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158226  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158404  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.158419  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.158731  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158753  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158795  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158828  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158932  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.159126  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.162347  123537 addons.go:234] Setting addon default-storageclass=true in "embed-certs-666637"
	W0316 00:16:50.162365  123537 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:16:50.162392  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.162612  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.162649  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.172299  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0316 00:16:50.172676  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.173173  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.173193  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.173547  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.173770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.175668  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.177676  123537 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:16:50.175968  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0316 00:16:50.176110  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0316 00:16:50.179172  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:16:50.179189  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:16:50.179206  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.179453  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179538  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179888  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.179909  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180021  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.180037  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180266  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180385  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180613  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.180788  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.180811  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.185060  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.192504  123537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:16:46.292804  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293326  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293363  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:46.293267  124655 retry.go:31] will retry after 2.301997121s: waiting for machine to come up
	I0316 00:16:48.596337  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596777  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:48.596731  124655 retry.go:31] will retry after 3.159447069s: waiting for machine to come up
	I0316 00:16:50.186146  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.186717  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.193945  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.193971  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.194051  123537 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.194079  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:16:50.194100  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.194103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.194264  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.194420  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.196511  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0316 00:16:50.197160  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.197580  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.197598  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.197658  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198007  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.198039  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.198038  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198235  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.198237  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.198435  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.198612  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.198772  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.200270  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.200540  123537 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.200554  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:16:50.200566  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.203147  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203634  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.203655  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203765  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.203966  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.204201  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.204335  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.317046  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:50.340203  123537 node_ready.go:35] waiting up to 6m0s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:50.415453  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.423732  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.424648  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:16:50.424663  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:16:50.470134  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:16:50.470164  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:16:50.518806  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:50.518833  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:16:50.570454  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
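Once these manifests are applied, minikube goes on to verify the addon ("Verifying addon metrics-server=true" further below). A manual spot check of the same state, assuming the profile name doubles as the kubeconfig context as it does elsewhere in this report:

	# the aggregated APIService must report Available=True before `kubectl top` works
	kubectl --context embed-certs-666637 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-666637 -n kube-system get deploy metrics-server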
	I0316 00:16:51.627153  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203388401s)
	I0316 00:16:51.627211  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627222  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627419  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211925303s)
	I0316 00:16:51.627468  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627533  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627595  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627609  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627620  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627549  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627859  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627885  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627895  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627914  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627956  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627976  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.629345  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.633811  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.633831  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.634043  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.634081  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726400  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.15588774s)
	I0316 00:16:51.726458  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726472  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.726820  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.726853  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.726875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726889  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726898  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.727178  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.727193  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.727206  123537 addons.go:470] Verifying addon metrics-server=true in "embed-certs-666637"
	I0316 00:16:51.729277  123537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0316 00:16:51.730645  123537 addons.go:505] duration metric: took 1.594919212s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0316 00:16:52.344107  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:51.757133  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757570  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Found IP for machine: 192.168.72.198
	I0316 00:16:51.757603  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has current primary IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserving static IP address...
	I0316 00:16:51.758067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.758093  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | skip adding static IP to network mk-default-k8s-diff-port-313436 - found existing host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"}
	I0316 00:16:51.758110  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserved static IP address: 192.168.72.198
	I0316 00:16:51.758120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Getting to WaitForSSH function...
	I0316 00:16:51.758138  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for SSH to be available...
	I0316 00:16:51.760276  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760596  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.760632  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760711  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH client type: external
	I0316 00:16:51.760744  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa (-rw-------)
	I0316 00:16:51.760797  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:51.760820  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | About to run SSH command:
	I0316 00:16:51.760861  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | exit 0
	I0316 00:16:51.887432  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:51.887829  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetConfigRaw
	I0316 00:16:51.888471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:51.891514  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.891923  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.891949  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.892232  123819 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:16:51.892502  123819 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:51.892527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:51.892782  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:51.895025  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.895367  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:51.895683  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895841  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:51.896178  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:51.896361  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:51.896372  123819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:52.012107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:52.012154  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012405  123819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-313436"
	I0316 00:16:52.012434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012640  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.015307  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.015823  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.015847  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.016055  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.016266  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016433  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016565  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.016758  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.016976  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.016992  123819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313436 && echo "default-k8s-diff-port-313436" | sudo tee /etc/hostname
	I0316 00:16:52.149152  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313436
	
	I0316 00:16:52.149180  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.152472  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.152852  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.152896  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.153056  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.153239  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153412  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.153837  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.154077  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.154108  123819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:52.285258  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
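
The shell snippet above keeps the hostname mapping idempotent: it only touches /etc/hosts when the machine name is not already mapped, and it prefers rewriting an existing 127.0.1.1 entry over appending a duplicate. A minimal Go sketch of the same logic, operating on a string rather than the real file (the sample contents are made up):

// Illustrative-only Go version of the idempotent /etc/hosts edit performed by
// the shell snippet above: if the machine name is not already mapped, rewrite
// an existing 127.0.1.1 line or append a new one.
package main

import (
    "fmt"
    "strings"
)

func ensureHostEntry(hosts, name string) string {
    lines := strings.Split(strings.TrimRight(hosts, "\n"), "\n")
    // Already mapped to some address? Leave the file alone.
    for _, l := range lines {
        f := strings.Fields(l)
        if len(f) >= 2 && f[len(f)-1] == name {
            return hosts
        }
    }
    entry := "127.0.1.1 " + name
    // Prefer rewriting an existing 127.0.1.1 entry over appending a duplicate.
    for i, l := range lines {
        if strings.HasPrefix(l, "127.0.1.1") {
            lines[i] = entry
            return strings.Join(lines, "\n") + "\n"
        }
    }
    return strings.Join(append(lines, entry), "\n") + "\n"
}

func main() {
    hosts := "127.0.0.1 localhost\n"
    fmt.Print(ensureHostEntry(hosts, "default-k8s-diff-port-313436"))
}
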
	I0316 00:16:52.285290  123819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:52.285313  123819 buildroot.go:174] setting up certificates
	I0316 00:16:52.285323  123819 provision.go:84] configureAuth start
	I0316 00:16:52.285331  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.285631  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:52.288214  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288494  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.288527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288699  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.290965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291354  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.291380  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291571  123819 provision.go:143] copyHostCerts
	I0316 00:16:52.291644  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:52.291658  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:52.291719  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:52.291827  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:52.291839  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:52.291868  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:52.291966  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:52.291978  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:52.292005  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:52.292095  123819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313436 san=[127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube]
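
The server certificate generated here is signed by the profile CA and carries the SANs listed in the log line above (loopback, the VM's IP 192.168.72.198, the machine name, localhost, and minikube), which is what lets clients reach the host under any of those names. A minimal Go sketch of issuing such a SAN-bearing server certificate; the on-the-fly CA, key sizes, and validity are illustrative, not minikube's exact parameters:

// Minimal sketch of issuing a server certificate with the SANs listed above,
// signed by a CA. All parameters are illustrative.
package main

import (
    "crypto/rand"
    "crypto/rsa"
    "crypto/x509"
    "crypto/x509/pkix"
    "fmt"
    "math/big"
    "net"
    "time"
)

func main() {
    // Stand-in CA generated on the fly; minikube loads ca.pem/ca-key.pem instead.
    caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    caTmpl := &x509.Certificate{
        SerialNumber:          big.NewInt(1),
        Subject:               pkix.Name{CommonName: "minikubeCA"},
        NotBefore:             time.Now(),
        NotAfter:              time.Now().Add(24 * time.Hour),
        IsCA:                  true,
        KeyUsage:              x509.KeyUsageCertSign,
        BasicConstraintsValid: true,
    }
    caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    caCert, _ := x509.ParseCertificate(caDER)

    srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    srvTmpl := &x509.Certificate{
        SerialNumber: big.NewInt(2),
        Subject:      pkix.Name{CommonName: "default-k8s-diff-port-313436"},
        NotBefore:    time.Now(),
        NotAfter:     time.Now().Add(24 * time.Hour),
        DNSNames:     []string{"default-k8s-diff-port-313436", "localhost", "minikube"},
        IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.198")},
        ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    }
    der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    if err != nil {
        panic(err)
    }
    fmt.Printf("issued server cert, %d bytes DER\n", len(der))
}
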
	I0316 00:16:52.536692  123819 provision.go:177] copyRemoteCerts
	I0316 00:16:52.536756  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:52.536790  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.539525  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.539805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.539837  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.540067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.540264  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.540424  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.540599  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:52.629139  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:52.655092  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0316 00:16:52.681372  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:52.706496  123819 provision.go:87] duration metric: took 421.160351ms to configureAuth
	I0316 00:16:52.706529  123819 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:52.706737  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:52.706828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.709743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710173  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.710198  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710403  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.710616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710822  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710983  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.711148  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.711359  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.711380  123819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:53.005107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:53.005138  123819 machine.go:97] duration metric: took 1.112619102s to provisionDockerMachine
	I0316 00:16:53.005153  123819 start.go:293] postStartSetup for "default-k8s-diff-port-313436" (driver="kvm2")
	I0316 00:16:53.005166  123819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:53.005185  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.005547  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:53.005581  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.008749  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009170  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.009196  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009416  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.009617  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.009795  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.009973  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.100468  123819 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:53.105158  123819 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:53.105181  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:53.105243  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:53.105314  123819 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:53.105399  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:53.116078  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:53.142400  123819 start.go:296] duration metric: took 137.231635ms for postStartSetup
	I0316 00:16:53.142454  123819 fix.go:56] duration metric: took 18.493815855s for fixHost
	I0316 00:16:53.142483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.145282  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145658  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.145688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145878  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.146104  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146288  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146445  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.146625  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:53.146820  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:53.146834  123819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:53.260232  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548213.237261690
	
	I0316 00:16:53.260255  123819 fix.go:216] guest clock: 1710548213.237261690
	I0316 00:16:53.260262  123819 fix.go:229] Guest: 2024-03-16 00:16:53.23726169 +0000 UTC Remote: 2024-03-16 00:16:53.142460792 +0000 UTC m=+262.706636561 (delta=94.800898ms)
	I0316 00:16:53.260292  123819 fix.go:200] guest clock delta is within tolerance: 94.800898ms
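
The clock check above effectively runs `date +%s.%N` on the guest, parses the result, and compares it with the host's wall clock; the restart proceeds without resyncing only because the ~95ms delta is inside the tolerance. A minimal Go sketch of that comparison, assuming a one-second tolerance (the real threshold lives in minikube's fix logic):

// Minimal sketch (not minikube's implementation): parse the guest's
// `date +%s.%N` output and compare it with the local clock.
package main

import (
    "fmt"
    "strconv"
    "strings"
    "time"
)

// parseGuestClock turns "1710548213.237261690" into a time.Time. It assumes
// the fractional part, when present, is the 9-digit nanosecond field.
func parseGuestClock(out string) (time.Time, error) {
    parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    sec, err := strconv.ParseInt(parts[0], 10, 64)
    if err != nil {
        return time.Time{}, err
    }
    var nsec int64
    if len(parts) == 2 {
        if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
            return time.Time{}, err
        }
    }
    return time.Unix(sec, nsec), nil
}

func main() {
    guest, err := parseGuestClock("1710548213.237261690")
    if err != nil {
        panic(err)
    }
    delta := time.Since(guest)
    if delta < 0 {
        delta = -delta
    }
    const tolerance = time.Second // assumed threshold, for illustration only
    fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}
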
	I0316 00:16:53.260298  123819 start.go:83] releasing machines lock for "default-k8s-diff-port-313436", held for 18.611697781s
	I0316 00:16:53.260323  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.260629  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:53.263641  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264002  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.264032  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.264889  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265217  123819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:53.265273  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.265404  123819 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:53.265434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.268274  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268538  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268684  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268727  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.268969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268995  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.269113  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269206  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.269298  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269419  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.269476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269572  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.372247  123819 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:53.378643  123819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:53.527036  123819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:53.534220  123819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:53.534312  123819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:53.554856  123819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:53.554900  123819 start.go:494] detecting cgroup driver to use...
	I0316 00:16:53.554971  123819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:53.580723  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:53.599919  123819 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:53.599996  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:53.613989  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:53.628748  123819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:53.745409  123819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:53.906668  123819 docker.go:233] disabling docker service ...
	I0316 00:16:53.906733  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:53.928452  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:53.949195  123819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:54.118868  123819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:54.250006  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:54.264754  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:54.285825  123819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:54.285890  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.298522  123819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:54.298590  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.311118  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.323928  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.336128  123819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:54.348715  123819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:54.359657  123819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:54.359718  123819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:54.376411  123819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:16:54.388136  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:54.530444  123819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:54.681895  123819 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:54.681984  123819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:54.687334  123819 start.go:562] Will wait 60s for crictl version
	I0316 00:16:54.687398  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:16:54.691443  123819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:54.730408  123819 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:54.730505  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.761591  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.792351  123819 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
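
Before crio is restarted above, the sed commands pin the pause image to registry.k8s.io/pause:3.9 and force the cgroupfs cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A minimal Go sketch of the same line-oriented rewrite (the sample drop-in contents are illustrative):

// Minimal sketch of the sed-style rewrites above: pin the pause image and
// force the cgroupfs cgroup manager in a crio drop-in.
package main

import (
    "fmt"
    "regexp"
)

func main() {
    conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
`
    pauseRe := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    cgroupRe := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    conf = pauseRe.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
    conf = cgroupRe.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
    fmt.Print(conf)
}
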
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
	I0316 00:16:54.793693  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:54.797023  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797439  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:54.797471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797665  123819 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:54.802065  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:54.815168  123819 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:54.815285  123819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:54.815345  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:54.855493  123819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:54.855553  123819 ssh_runner.go:195] Run: which lz4
	I0316 00:16:54.860096  123819 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:16:54.865644  123819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:54.865675  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:54.345117  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:56.346342  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:57.346164  123537 node_ready.go:49] node "embed-certs-666637" has status "Ready":"True"
	I0316 00:16:57.346194  123537 node_ready.go:38] duration metric: took 7.005950923s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:57.346207  123537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:57.361331  123537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377726  123537 pod_ready.go:92] pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace has status "Ready":"True"
	I0316 00:16:57.377750  123537 pod_ready.go:81] duration metric: took 16.388353ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377760  123537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
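
The retry.go lines above show the wait-for-IP loop: libvirt is polled for the domain's DHCP lease and, while no address is found, the poll backs off with a growing, jittered delay. A minimal Go sketch of that polling pattern; the predicate, delays, and timeout are illustrative rather than minikube's actual values:

// Minimal sketch of the jittered-backoff polling shown in the retry.go lines above.
package main

import (
    "errors"
    "fmt"
    "math/rand"
    "time"
)

// waitFor polls check() until it succeeds or the deadline passes, sleeping a
// randomised, growing delay between attempts.
func waitFor(check func() error, deadline time.Duration) error {
    base := 200 * time.Millisecond
    start := time.Now()
    for attempt := 1; ; attempt++ {
        if err := check(); err == nil {
            return nil
        }
        if time.Since(start) > deadline {
            return errors.New("timed out waiting for machine to come up")
        }
        delay := base*time.Duration(attempt) + time.Duration(rand.Int63n(int64(base)))
        fmt.Printf("attempt %d failed, will retry after %v\n", attempt, delay)
        time.Sleep(delay)
    }
}

func main() {
    calls := 0
    err := waitFor(func() error {
        calls++
        if calls < 4 { // pretend the DHCP lease shows up on the 4th poll
            return errors.New("unable to find current IP address")
        }
        return nil
    }, 30*time.Second)
    fmt.Println("done, err =", err)
}
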
	I0316 00:16:56.676506  123819 crio.go:444] duration metric: took 1.816442841s to copy over tarball
	I0316 00:16:56.676609  123819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:59.338617  123819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661966532s)
	I0316 00:16:59.338655  123819 crio.go:451] duration metric: took 2.662115388s to extract the tarball
	I0316 00:16:59.338665  123819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:59.387693  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:59.453534  123819 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:59.453565  123819 cache_images.go:84] Images are preloaded, skipping loading
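
The preload flow above checks whether /preloaded.tar.lz4 already exists on the guest, copies the ~458 MB tarball over when it does not, extracts it into /var with `tar -I lz4`, removes the tarball, and re-runs `crictl images` to confirm the images are now present. A minimal local-only Go sketch of the copy-if-missing-then-extract step; paths are illustrative, minikube performs these steps over SSH on the guest, and the extraction needs the lz4 binary installed:

// Minimal local-only sketch of the preload step: copy the tarball if the
// target does not already have it, then unpack it with tar.
package main

import (
    "fmt"
    "io"
    "os"
    "os/exec"
)

func ensurePreload(src, dst, extractDir string) error {
    if _, err := os.Stat(dst); err == nil {
        fmt.Println("preload already present, skipping copy")
    } else {
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()
        out, err := os.Create(dst)
        if err != nil {
            return err
        }
        defer out.Close()
        if _, err := io.Copy(out, in); err != nil {
            return err
        }
    }
    // Mirrors the extraction command in the log: tar --xattrs ... -I lz4 -C <dir> -xf <tarball>
    cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
        "-I", "lz4", "-C", extractDir, "-xf", dst)
    cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    return cmd.Run()
}

func main() {
    if err := ensurePreload("preloaded-images.tar.lz4", "/tmp/preloaded.tar.lz4", "/tmp/extract"); err != nil {
        fmt.Fprintln(os.Stderr, "preload failed:", err)
    }
}
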
	I0316 00:16:59.453575  123819 kubeadm.go:928] updating node { 192.168.72.198 8444 v1.28.4 crio true true} ...
	I0316 00:16:59.453744  123819 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-313436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:59.453841  123819 ssh_runner.go:195] Run: crio config
	I0316 00:16:59.518492  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:16:59.518525  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:59.518543  123819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:59.518572  123819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.198 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313436 NodeName:default-k8s-diff-port-313436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:59.518791  123819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.198
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313436"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
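
The kubeadm, kubelet, and kube-proxy configuration printed above is rendered from the profile's settings (advertise address 192.168.72.198, API server port 8444, pod subnet 10.244.0.0/16, crio socket) and then uploaded as /var/tmp/minikube/kubeadm.yaml.new. A simplified Go sketch of rendering such a config from per-profile values; the template below is heavily trimmed for illustration and is not minikube's own:

// Simplified sketch of rendering per-profile values into a kubeadm config.
package main

import (
    "os"
    "text/template"
)

type kubeadmParams struct {
    AdvertiseAddress string
    BindPort         int
    NodeName         string
    PodSubnet        string
    CRISocket        string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
    t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    _ = t.Execute(os.Stdout, kubeadmParams{
        AdvertiseAddress: "192.168.72.198",
        BindPort:         8444,
        NodeName:         "default-k8s-diff-port-313436",
        PodSubnet:        "10.244.0.0/16",
        CRISocket:        "unix:///var/run/crio/crio.sock",
    })
}
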
	
	I0316 00:16:59.518876  123819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:59.529778  123819 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:59.529860  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:59.542186  123819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0316 00:16:59.563037  123819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:59.585167  123819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 00:16:59.607744  123819 ssh_runner.go:195] Run: grep 192.168.72.198	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:59.612687  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:59.628607  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:59.767487  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:59.786494  123819 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436 for IP: 192.168.72.198
	I0316 00:16:59.786520  123819 certs.go:194] generating shared ca certs ...
	I0316 00:16:59.786545  123819 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:59.786688  123819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:59.786722  123819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:59.786728  123819 certs.go:256] generating profile certs ...
	I0316 00:16:59.786827  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.key
	I0316 00:16:59.786975  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key.254d5830
	I0316 00:16:59.787049  123819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key
	I0316 00:16:59.787204  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:59.787248  123819 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:59.787262  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:59.787295  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:59.787351  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:59.787386  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:59.787449  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:59.788288  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:59.824257  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:59.859470  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:59.904672  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:59.931832  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0316 00:16:59.965654  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:00.006949  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:00.039120  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:00.071341  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:00.095585  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:00.122165  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:00.149982  123819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:00.170019  123819 ssh_runner.go:195] Run: openssl version
	I0316 00:17:00.176232  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:00.188738  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193708  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193780  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.200433  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:00.215116  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:00.228871  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234074  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234141  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.240553  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:00.252454  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:00.264690  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269493  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269573  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.275584  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:00.287859  123819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:00.292474  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:00.298744  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:00.304793  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:00.311156  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:00.317777  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:00.324148  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:00.330667  123819 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:00.330763  123819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:00.330813  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.374868  123819 cri.go:89] found id: ""
	I0316 00:17:00.374961  123819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:00.386218  123819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:00.386240  123819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:00.386245  123819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:00.386288  123819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:00.397129  123819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:00.398217  123819 kubeconfig.go:125] found "default-k8s-diff-port-313436" server: "https://192.168.72.198:8444"
	I0316 00:17:00.400506  123819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:00.411430  123819 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.198
	I0316 00:17:00.411462  123819 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:00.411477  123819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:00.411528  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.448545  123819 cri.go:89] found id: ""
	I0316 00:17:00.448619  123819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:00.469230  123819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:00.480622  123819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:00.480644  123819 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:00.480695  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0316 00:16:59.384420  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.094272  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.390117  123537 pod_ready.go:92] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.390145  123537 pod_ready.go:81] duration metric: took 5.012377671s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.390156  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398207  123537 pod_ready.go:92] pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.398236  123537 pod_ready.go:81] duration metric: took 8.071855ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398248  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405415  123537 pod_ready.go:92] pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.405443  123537 pod_ready.go:81] duration metric: took 7.186495ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405453  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412646  123537 pod_ready.go:92] pod "kube-proxy-8fpc5" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.412665  123537 pod_ready.go:81] duration metric: took 7.204465ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412673  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606336  123537 pod_ready.go:92] pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.606369  123537 pod_ready.go:81] duration metric: took 193.687951ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606384  123537 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:00.492088  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:00.743504  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:00.756322  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0316 00:17:00.766476  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:00.766545  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:00.776849  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.786610  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:00.786676  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.797455  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0316 00:17:00.808026  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:00.808083  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:00.819306  123819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:00.834822  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:00.962203  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.535753  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.762322  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.843195  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.944855  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:01.944971  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.446047  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.945791  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.983641  123819 api_server.go:72] duration metric: took 1.038786332s to wait for apiserver process to appear ...
	I0316 00:17:02.983680  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:02.983704  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:04.615157  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:07.114447  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:06.343729  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.343763  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.343786  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.364621  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.364659  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.483852  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.491403  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.491433  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:06.983931  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.994258  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.994296  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.483821  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.506265  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:07.506301  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.983846  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.988700  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:17:07.995996  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:17:07.996021  123819 api_server.go:131] duration metric: took 5.012333318s to wait for apiserver health ...
	I0316 00:17:07.996032  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:17:07.996041  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:07.998091  123819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:17:07.999628  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:17:08.010263  123819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:17:08.041667  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:17:08.053611  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:17:08.053656  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:17:08.053668  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:17:08.053681  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:17:08.053694  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:17:08.053706  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:17:08.053717  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:17:08.053730  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:17:08.053739  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:17:08.053747  123819 system_pods.go:74] duration metric: took 12.054433ms to wait for pod list to return data ...
	I0316 00:17:08.053763  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:17:08.057781  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:17:08.057808  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:17:08.057818  123819 node_conditions.go:105] duration metric: took 4.047698ms to run NodePressure ...
	I0316 00:17:08.057837  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:08.282870  123819 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288338  123819 kubeadm.go:733] kubelet initialised
	I0316 00:17:08.288359  123819 kubeadm.go:734] duration metric: took 5.456436ms waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288367  123819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:08.294256  123819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.302762  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302802  123819 pod_ready.go:81] duration metric: took 8.523485ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.302814  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302823  123819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.309581  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309604  123819 pod_ready.go:81] duration metric: took 6.77179ms for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.309617  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309625  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.315399  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315419  123819 pod_ready.go:81] duration metric: took 5.78558ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.315428  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315434  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.445776  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445808  123819 pod_ready.go:81] duration metric: took 130.363739ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.445821  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445829  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.846181  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846228  123819 pod_ready.go:81] duration metric: took 400.382095ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.846243  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846251  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.245568  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245599  123819 pod_ready.go:81] duration metric: took 399.329058ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.245612  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245618  123819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.646855  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646888  123819 pod_ready.go:81] duration metric: took 401.262603ms for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.646901  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646909  123819 pod_ready.go:38] duration metric: took 1.358531936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:09.646926  123819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:17:09.659033  123819 ops.go:34] apiserver oom_adj: -16
	I0316 00:17:09.659059  123819 kubeadm.go:591] duration metric: took 9.272806311s to restartPrimaryControlPlane
	I0316 00:17:09.659070  123819 kubeadm.go:393] duration metric: took 9.328414192s to StartCluster
	I0316 00:17:09.659091  123819 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.659166  123819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:09.661439  123819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.661729  123819 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:17:09.663462  123819 out.go:177] * Verifying Kubernetes components...
	I0316 00:17:09.661800  123819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:17:09.661986  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:17:09.664841  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:09.664874  123819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664839  123819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664964  123819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.664980  123819 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:17:09.664847  123819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.665023  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.665037  123819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.665053  123819 addons.go:243] addon metrics-server should already be in state true
	I0316 00:17:09.665084  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.664922  123819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313436"
	I0316 00:17:09.665349  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665377  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665445  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665474  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665607  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665637  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.680337  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0316 00:17:09.680351  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0316 00:17:09.680799  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.680939  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.681331  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681366  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681541  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681560  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681736  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.681974  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.682359  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682407  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.682461  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682494  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.683660  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0316 00:17:09.684088  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.684575  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.684600  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.684992  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.685218  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.688973  123819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.688994  123819 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:17:09.689028  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.689372  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.689397  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.698126  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0316 00:17:09.698527  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.699052  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.699079  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.699407  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.699606  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.700389  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0316 00:17:09.700824  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.701308  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.701327  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.701610  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.701681  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.704168  123819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:17:09.701891  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.704403  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0316 00:17:09.706042  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:17:09.706076  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:17:09.706102  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.706988  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.707805  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.707831  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.708465  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.708556  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.709451  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.709500  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.709520  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.711354  123819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:09.709911  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.710103  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.712849  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.712865  123819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:09.712886  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:17:09.712910  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.713010  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.713202  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.713365  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.715688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716029  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.716064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716260  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.716437  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.716662  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.716826  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.725309  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0316 00:17:09.725659  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.726175  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.726191  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.726492  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.726665  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.728459  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.728721  123819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.728739  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:17:09.728753  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.732122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732546  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.732576  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732733  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.732908  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.733064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.733206  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.838182  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:09.857248  123819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:09.956751  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:17:09.956775  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:17:09.982142  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.992293  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:17:09.992319  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:17:10.000878  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:10.035138  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:10.035171  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:17:10.066721  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:11.153759  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171576504s)
	I0316 00:17:11.153815  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.153828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154237  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154241  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154262  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.154271  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.154281  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154569  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154601  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154609  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165531  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.165579  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.165868  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.165922  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165879  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536530  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.469764101s)
	I0316 00:17:11.536596  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536607  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536648  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53572281s)
	I0316 00:17:11.536694  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536713  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536963  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536988  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536995  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537001  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537005  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537010  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537013  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537019  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537218  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537365  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537376  123819 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-313436"
	I0316 00:17:11.537404  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537425  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.539481  123819 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0316 00:17:09.114699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:11.613507  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:13.204814  123454 start.go:364] duration metric: took 52.116735477s to acquireMachinesLock for "no-preload-238598"
	I0316 00:17:13.204888  123454 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:17:13.204900  123454 fix.go:54] fixHost starting: 
	I0316 00:17:13.205405  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:13.205446  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:13.222911  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0316 00:17:13.223326  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:13.223784  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:17:13.223811  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:13.224153  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:13.224338  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:13.224507  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:17:13.226028  123454 fix.go:112] recreateIfNeeded on no-preload-238598: state=Stopped err=<nil>
	I0316 00:17:13.226051  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	W0316 00:17:13.226232  123454 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:17:13.227865  123454 out.go:177] * Restarting existing kvm2 VM for "no-preload-238598" ...
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:17:11.540876  123819 addons.go:505] duration metric: took 1.87908534s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0316 00:17:11.862772  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.866333  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.229181  123454 main.go:141] libmachine: (no-preload-238598) Calling .Start
	I0316 00:17:13.229409  123454 main.go:141] libmachine: (no-preload-238598) Ensuring networks are active...
	I0316 00:17:13.230257  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network default is active
	I0316 00:17:13.230618  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network mk-no-preload-238598 is active
	I0316 00:17:13.231135  123454 main.go:141] libmachine: (no-preload-238598) Getting domain xml...
	I0316 00:17:13.232023  123454 main.go:141] libmachine: (no-preload-238598) Creating domain...
	I0316 00:17:14.513800  123454 main.go:141] libmachine: (no-preload-238598) Waiting to get IP...
	I0316 00:17:14.514838  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.515446  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.515520  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.515407  125029 retry.go:31] will retry after 275.965955ms: waiting for machine to come up
	I0316 00:17:14.793095  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.793594  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.793721  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.793667  125029 retry.go:31] will retry after 347.621979ms: waiting for machine to come up
	I0316 00:17:15.143230  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.143869  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.143909  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.143820  125029 retry.go:31] will retry after 301.441766ms: waiting for machine to come up
	I0316 00:17:15.446476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.446917  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.446964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.446865  125029 retry.go:31] will retry after 431.207345ms: waiting for machine to come up
	I0316 00:17:13.615911  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.616381  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:17.618352  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:16.362143  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:16.866488  123819 node_ready.go:49] node "default-k8s-diff-port-313436" has status "Ready":"True"
	I0316 00:17:16.866522  123819 node_ready.go:38] duration metric: took 7.00923342s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:16.866535  123819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:16.881909  123819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897574  123819 pod_ready.go:92] pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:16.897617  123819 pod_ready.go:81] duration metric: took 15.618728ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897630  123819 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:18.910740  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
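The pod_ready lines above reflect minikube polling each system-critical pod's Ready condition until it flips to True or the 6m0s budget runs out. A minimal client-go sketch of that check (the kubeconfig path is the default one and the pod name is taken from the log; this is illustrative, not minikube's actual helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Pod name copied from the log; any system pod works the same way.
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-default-k8s-diff-port-313436", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // short poll, as the repeated pod_ready lines suggest
	}
	fmt.Println("timed out waiting for pod to become Ready")
}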
	I0316 00:17:15.879693  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.880186  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.880222  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.880148  125029 retry.go:31] will retry after 747.650888ms: waiting for machine to come up
	I0316 00:17:16.629378  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:16.631312  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:16.631352  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:16.631193  125029 retry.go:31] will retry after 670.902171ms: waiting for machine to come up
	I0316 00:17:17.304282  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:17.304704  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:17.304751  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:17.304658  125029 retry.go:31] will retry after 1.160879196s: waiting for machine to come up
	I0316 00:17:18.466662  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:18.467103  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:18.467136  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:18.467049  125029 retry.go:31] will retry after 948.597188ms: waiting for machine to come up
	I0316 00:17:19.417144  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:19.417623  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:19.417657  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:19.417561  125029 retry.go:31] will retry after 1.263395738s: waiting for machine to come up
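The retry.go:31 lines show the KVM driver waiting for the freshly started VM to obtain a DHCP lease, retrying with a growing, jittered delay. A minimal sketch of that pattern under the same shape of loop (lookupIP here is a hypothetical stand-in for the libvirt DHCP-lease query, not minikube's actual code):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the libvirt network's DHCP leases
// for the domain's MAC address; it is illustrative only.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 250 * time.Millisecond
	deadline := time.Now().Add(4 * time.Minute)

	for time.Now().Before(deadline) {
		ip, err := lookupIP("no-preload-238598")
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		// Grow the delay and add jitter, matching the increasing
		// "will retry after ..." values in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	fmt.Println("gave up waiting for an IP address")
}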
	I0316 00:17:20.289713  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.613643  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
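The cache_images lines above trace the shape of LoadCachedImages: inspect each required image in the node's container runtime, and for any image whose stored ID does not match the expected hash, remove it with crictl and schedule a transfer from the local cache directory. A rough per-image sketch of that check, shown as plain exec calls rather than minikube's SSH runner (the image name and expected hash are copied from the log for illustration):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks the runtime (podman, as in the log) for the stored image ID.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	image := "registry.k8s.io/kube-proxy:v1.20.0"
	want := "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc"

	got, err := imageID(image)
	if err != nil || got != want {
		fmt.Printf("%q needs transfer: removing stale copy and loading from cache\n", image)
		// Mirrors the "crictl rmi" + "Loading image from ..." steps in the log.
		_ = exec.Command("sudo", "crictl", "rmi", image).Run()
		// Copying the cached tarball onto the node would follow here.
		return
	}
	fmt.Printf("%q already present with the expected ID\n", image)
}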
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
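The three blocks above install each CA certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL-based clients discover trusted CAs. A sketch of that single step, shelling out to openssl the same way the test does (the certificate path is one of the files from the log; run as root to write into /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log

	// openssl prints the subject hash used to name links in /etc/ssl/certs.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent to: test -L <link> || ln -fs <cert> <link>
	if _, err := os.Lstat(link); err != nil {
		if err := os.Symlink(cert, link); err != nil {
			panic(err)
		}
	}
	fmt.Println("trusted CA linked as", link)
}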
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
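The `-checkend 86400` runs above verify that none of the control-plane certificates expires within the next 24 hours; a failing check would trigger regeneration before the cluster restart. The same test can be expressed natively with crypto/x509, as in this sketch (the path is one of the certificates from the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}

	// Equivalent to `openssl x509 -checkend 86400`: fail if the cert
	// will already be expired 24 hours from now.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h, needs regeneration")
		return
	}
	fmt.Println("certificate valid past the 24h window, keeping it")
}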
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
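After running the kubeadm restore phases, the test polls for a kube-apiserver process every 500ms (visible in the .221/.721 timestamps above) until it appears or the wait budget is spent. A minimal sketch of that wait loop using the same pgrep pattern (the 4-minute budget here is an assumption, not the log's exact timeout):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Same pattern the log runs over SSH: pgrep exits 0 once a
		// process matching kube-apiserver.*minikube.* exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("apiserver process is up")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the apiserver process")
}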
	I0316 00:17:21.865146  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.241535  123819 pod_ready.go:92] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.241561  123819 pod_ready.go:81] duration metric: took 5.34392174s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.241573  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247469  123819 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.247501  123819 pod_ready.go:81] duration metric: took 5.919787ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247515  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756151  123819 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.756180  123819 pod_ready.go:81] duration metric: took 508.652978ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756194  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762214  123819 pod_ready.go:92] pod "kube-proxy-btmmm" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.762254  123819 pod_ready.go:81] duration metric: took 6.041426ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762268  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769644  123819 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.769668  123819 pod_ready.go:81] duration metric: took 7.391813ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769681  123819 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:24.780737  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.682443  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:20.798804  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:20.798840  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:20.682821  125029 retry.go:31] will retry after 1.834378571s: waiting for machine to come up
	I0316 00:17:22.518539  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:22.518997  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:22.519027  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:22.518945  125029 retry.go:31] will retry after 1.944866033s: waiting for machine to come up
	I0316 00:17:24.466332  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:24.466902  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:24.466930  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:24.466847  125029 retry.go:31] will retry after 3.4483736s: waiting for machine to come up
	I0316 00:17:24.615642  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.113920  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.278017  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:29.777128  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.919457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:27.919931  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:27.919964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:27.919891  125029 retry.go:31] will retry after 3.122442649s: waiting for machine to come up
	I0316 00:17:29.613500  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.613674  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.276855  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:34.277228  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.044512  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:31.044939  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:31.044970  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:31.044884  125029 retry.go:31] will retry after 4.529863895s: waiting for machine to come up
	I0316 00:17:34.112266  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:36.118023  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:35.576311  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.576834  123454 main.go:141] libmachine: (no-preload-238598) Found IP for machine: 192.168.50.137
	I0316 00:17:35.576858  123454 main.go:141] libmachine: (no-preload-238598) Reserving static IP address...
	I0316 00:17:35.576875  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has current primary IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.577312  123454 main.go:141] libmachine: (no-preload-238598) Reserved static IP address: 192.168.50.137
	I0316 00:17:35.577355  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.577365  123454 main.go:141] libmachine: (no-preload-238598) Waiting for SSH to be available...
	I0316 00:17:35.577404  123454 main.go:141] libmachine: (no-preload-238598) DBG | skip adding static IP to network mk-no-preload-238598 - found existing host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"}
	I0316 00:17:35.577419  123454 main.go:141] libmachine: (no-preload-238598) DBG | Getting to WaitForSSH function...
	I0316 00:17:35.579640  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580061  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.580108  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580210  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH client type: external
	I0316 00:17:35.580269  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa (-rw-------)
	I0316 00:17:35.580303  123454 main.go:141] libmachine: (no-preload-238598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:35.580319  123454 main.go:141] libmachine: (no-preload-238598) DBG | About to run SSH command:
	I0316 00:17:35.580339  123454 main.go:141] libmachine: (no-preload-238598) DBG | exit 0
	I0316 00:17:35.711373  123454 main.go:141] libmachine: (no-preload-238598) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:35.711791  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetConfigRaw
	I0316 00:17:35.712598  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:35.715455  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.715929  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.715954  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.716326  123454 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:17:35.716525  123454 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:35.716551  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:35.716802  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.719298  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719612  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.719644  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719780  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.720005  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720178  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720315  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.720487  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.720666  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.720677  123454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:35.835733  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:35.835760  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836004  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:17:35.836033  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836240  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.839024  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839413  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.839445  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839627  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.839811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.839977  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.840133  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.840279  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.840485  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.840504  123454 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-238598 && echo "no-preload-238598" | sudo tee /etc/hostname
	I0316 00:17:35.976590  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-238598
	
	I0316 00:17:35.976624  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.979354  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979689  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.979720  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979879  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.980104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980267  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980445  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.980602  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.980796  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.980815  123454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-238598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-238598/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-238598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:36.106710  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:36.106750  123454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:36.106774  123454 buildroot.go:174] setting up certificates
	I0316 00:17:36.106786  123454 provision.go:84] configureAuth start
	I0316 00:17:36.106800  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:36.107104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.110050  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110431  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.110476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110592  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.113019  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113366  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.113391  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113517  123454 provision.go:143] copyHostCerts
	I0316 00:17:36.113595  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:36.113619  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:36.113699  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:36.113898  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:36.113911  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:36.113964  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:36.114051  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:36.114063  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:36.114089  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:36.114155  123454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.no-preload-238598 san=[127.0.0.1 192.168.50.137 localhost minikube no-preload-238598]
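
	The SAN list above (loopback, the VM's DHCP address, localhost, minikube, and the profile name) is what lets one server certificate satisfy both in-cluster and external clients. A rough Go sketch of building such a certificate template, assuming IP SANs and DNS SANs are simply split by parseability (illustrative only, not minikube's exact certificate code):

	package main

	import (
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// serverCertTemplate returns an x509 template whose SANs cover the names
	// and addresses listed in the log line above.
	func serverCertTemplate(org string, sans []string) *x509.Certificate {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{org}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		for _, san := range sans {
			if ip := net.ParseIP(san); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, san)
			}
		}
		return tmpl
	}

	func main() {
		t := serverCertTemplate("jenkins.no-preload-238598",
			[]string{"127.0.0.1", "192.168.50.137", "localhost", "minikube", "no-preload-238598"})
		fmt.Println(t.DNSNames, t.IPAddresses)
	}
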
	I0316 00:17:36.239622  123454 provision.go:177] copyRemoteCerts
	I0316 00:17:36.239706  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:36.239736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.242440  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.242806  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.242841  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.243086  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.243279  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.243482  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.243623  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.330601  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:36.359600  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:17:36.384258  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:36.409195  123454 provision.go:87] duration metric: took 302.39571ms to configureAuth
	I0316 00:17:36.409239  123454 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:36.409440  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:17:36.409539  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.412280  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412618  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.412652  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.413039  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413217  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413366  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.413576  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.413803  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.413823  123454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:36.703300  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:36.703365  123454 machine.go:97] duration metric: took 986.82471ms to provisionDockerMachine
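
	For context, the command a few lines above writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube so CRI-O treats the service CIDR (10.96.0.0/12) as an insecure registry, then restarts the daemon to pick the option up. A small sketch of composing that command, with a hypothetical helper name:

	package main

	import "fmt"

	// crioOptionsCmd writes the given options to /etc/sysconfig/crio.minikube
	// and restarts CRI-O, matching the provisioning command logged above.
	func crioOptionsCmd(opts string) string {
		return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\nCRIO_MINIKUBE_OPTIONS='%s'\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
	}

	func main() {
		fmt.Println(crioOptionsCmd("--insecure-registry 10.96.0.0/12 "))
	}
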
	I0316 00:17:36.703418  123454 start.go:293] postStartSetup for "no-preload-238598" (driver="kvm2")
	I0316 00:17:36.703440  123454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:36.703474  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.703838  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:36.703880  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.706655  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707019  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.707057  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707237  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.707470  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.707626  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.707822  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.794605  123454 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:36.799121  123454 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:36.799151  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:36.799222  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:36.799298  123454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:36.799423  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:36.808805  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:36.834244  123454 start.go:296] duration metric: took 130.803052ms for postStartSetup
	I0316 00:17:36.834290  123454 fix.go:56] duration metric: took 23.629390369s for fixHost
	I0316 00:17:36.834318  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.837197  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837643  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.837684  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837926  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.838155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838360  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838533  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.838721  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.838965  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.838982  123454 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:17:36.956309  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548256.900043121
	
	I0316 00:17:36.956352  123454 fix.go:216] guest clock: 1710548256.900043121
	I0316 00:17:36.956366  123454 fix.go:229] Guest: 2024-03-16 00:17:36.900043121 +0000 UTC Remote: 2024-03-16 00:17:36.83429667 +0000 UTC m=+356.318603082 (delta=65.746451ms)
	I0316 00:17:36.956398  123454 fix.go:200] guest clock delta is within tolerance: 65.746451ms
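
	The lines above compare the guest clock (read over SSH with date +%s.%N) against the host clock and only resync when the skew exceeds a tolerance; here the 65.7ms delta passes. A minimal sketch of that check, with illustrative names and a one-second tolerance assumed for the example:

	package main

	import (
		"fmt"
		"time"
	)

	// clockDelta reports the absolute skew between guest and host clocks and
	// whether it falls inside the allowed tolerance.
	func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		d := guest.Sub(host)
		if d < 0 {
			d = -d
		}
		return d, d <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(65 * time.Millisecond)
		d, ok := clockDelta(guest, host, time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", d, ok)
	}
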
	I0316 00:17:36.956425  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 23.751563248s
	I0316 00:17:36.956472  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.956736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.960077  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960494  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.960524  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960678  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961247  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961454  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961522  123454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:36.961588  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.961730  123454 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:36.961756  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.964457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964801  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.964834  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964905  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965346  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965374  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.965406  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965518  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.965609  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965681  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.965739  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965866  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.966034  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:37.077559  123454 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:37.084485  123454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:37.229503  123454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:37.236783  123454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:37.236862  123454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:37.255248  123454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:37.255275  123454 start.go:494] detecting cgroup driver to use...
	I0316 00:17:37.255377  123454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:37.272795  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:37.289822  123454 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:37.289885  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:37.306082  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:37.322766  123454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:37.448135  123454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:37.614316  123454 docker.go:233] disabling docker service ...
	I0316 00:17:37.614381  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:37.630091  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:37.645025  123454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:37.773009  123454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:37.891459  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:37.906829  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:37.927910  123454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:17:37.927982  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.939166  123454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:37.939226  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.950487  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.961547  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.972402  123454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:37.983413  123454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:37.993080  123454 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:37.993147  123454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:38.007746  123454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:38.017917  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:38.158718  123454 ssh_runner.go:195] Run: sudo systemctl restart crio
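
	The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch cgroup_manager to cgroupfs, force conmon into the pod cgroup, drop the stale CNI config, enable ip_forward, then reload systemd and restart CRI-O. A sketch of generating the two sed invocations (helper names are illustrative, not minikube's API):

	package main

	import "fmt"

	const crioConf = "/etc/crio/crio.conf.d/02-crio.conf"

	// pauseImageCmd pins CRI-O's pause image, as in the log above.
	func pauseImageCmd(image string) string {
		return fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, image, crioConf)
	}

	// cgroupManagerCmd switches CRI-O's cgroup manager, as in the log above.
	func cgroupManagerCmd(manager string) string {
		return fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, manager, crioConf)
	}

	func main() {
		fmt.Println(pauseImageCmd("registry.k8s.io/pause:3.9"))
		fmt.Println(cgroupManagerCmd("cgroupfs"))
	}
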
	I0316 00:17:38.329423  123454 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:38.329520  123454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:38.334518  123454 start.go:562] Will wait 60s for crictl version
	I0316 00:17:38.334570  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.338570  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:38.375688  123454 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:38.375779  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.408167  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.444754  123454 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.277480  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.281375  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.446078  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:38.448885  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449299  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:38.449329  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449565  123454 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:38.453922  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:38.467515  123454 kubeadm.go:877] updating cluster {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:38.467646  123454 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:17:38.467690  123454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:38.511057  123454 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:17:38.511093  123454 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:38.511189  123454 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.511221  123454 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0316 00:17:38.511240  123454 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.511253  123454 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.511305  123454 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.511335  123454 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.511338  123454 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.511188  123454 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.512934  123454 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.512949  123454 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.512953  123454 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0316 00:17:38.513014  123454 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.648129  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.650306  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.661334  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0316 00:17:38.666656  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.669280  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.684494  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.690813  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.760339  123454 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0316 00:17:38.760396  123454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.760449  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.760545  123454 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0316 00:17:38.760585  123454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.760641  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908463  123454 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0316 00:17:38.908491  123454 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0316 00:17:38.908515  123454 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.908525  123454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908579  123454 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0316 00:17:38.908607  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.908615  123454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.908585  123454 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908638  123454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.908739  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.954587  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.954611  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.954699  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.961857  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.961878  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0316 00:17:38.961979  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:38.962005  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.962010  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:39.052859  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.052888  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0316 00:17:39.052907  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.052958  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.052976  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.053001  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0316 00:17:39.052963  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.053055  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.053060  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0316 00:17:39.053100  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:39.053156  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.053235  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.120914  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.612614  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.779012  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:43.278631  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:41.133735  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0: (2.080597621s)
	I0316 00:17:41.133778  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0316 00:17:41.133890  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.080807025s)
	I0316 00:17:41.133924  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0316 00:17:41.133942  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.08085981s)
	I0316 00:17:41.133972  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133978  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.080988823s)
	I0316 00:17:41.133993  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133948  123454 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134011  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.080758975s)
	I0316 00:17:41.134031  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0316 00:17:41.134032  123454 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.01309054s)
	I0316 00:17:41.134060  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134083  123454 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0316 00:17:41.134110  123454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:41.134160  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:43.198894  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.064808781s)
	I0316 00:17:43.198926  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0316 00:17:43.198952  123454 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.198951  123454 ssh_runner.go:235] Completed: which crictl: (2.064761171s)
	I0316 00:17:43.199004  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.199051  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:43.112939  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.114446  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.613592  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.776235  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.777686  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.278307  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.110501  123454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.911421102s)
	I0316 00:17:47.110567  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0316 00:17:47.110695  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.911660704s)
	I0316 00:17:47.110728  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0316 00:17:47.110751  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:47.110703  123454 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:47.110802  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:49.585079  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.474253503s)
	I0316 00:17:49.585109  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0316 00:17:49.585130  123454 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (2.474308112s)
	I0316 00:17:49.585160  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0316 00:17:49.585134  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.585220  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.613704  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.615227  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:54.780467  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.736360  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.151102687s)
	I0316 00:17:51.736402  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0316 00:17:51.736463  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:51.736535  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:54.214591  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477993231s)
	I0316 00:17:54.214629  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0316 00:17:54.214658  123454 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:54.214728  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:55.171123  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0316 00:17:55.171204  123454 cache_images.go:123] Successfully loaded all cached images
	I0316 00:17:55.171213  123454 cache_images.go:92] duration metric: took 16.660103091s to LoadCachedImages
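
	To summarize the sequence that just finished: each required image is inspected in the guest's podman storage, removed via crictl when the stored hash does not match, and the cached tarball is then copied over (or skipped if already present) and loaded with podman load. A compressed sketch of that per-image flow; run is a local stand-in for minikube's SSH runner, and the function names are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a shell command; here it simply shells out locally as a
	// stand-in for minikube's SSH runner.
	func run(cmd string) error {
		return exec.Command("sh", "-c", cmd).Run()
	}

	// loadCachedImage removes a stale copy of the image (if any) and loads the
	// cached tarball into CRI-O's storage, mirroring the sequence logged above.
	func loadCachedImage(image, tarball string) error {
		_ = run("sudo /usr/bin/crictl rmi " + image) // best-effort removal of a mismatched image
		if err := run("sudo podman load -i " + tarball); err != nil {
			return fmt.Errorf("podman load %s: %w", tarball, err)
		}
		return nil
	}

	func main() {
		if err := loadCachedImage("registry.k8s.io/etcd:3.5.10-0",
			"/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
			fmt.Println(err)
		}
	}
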
	I0316 00:17:55.171233  123454 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:17:55.171506  123454 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-238598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:55.171617  123454 ssh_runner.go:195] Run: crio config
	I0316 00:17:55.225056  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:17:55.225078  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:55.225089  123454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:55.225110  123454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-238598 NodeName:no-preload-238598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:17:55.225278  123454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-238598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:55.225371  123454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:17:55.237834  123454 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:55.237896  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:55.248733  123454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 00:17:55.266587  123454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:17:55.285283  123454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0316 00:17:55.303384  123454 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:55.307384  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:55.321079  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:55.453112  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:55.470573  123454 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598 for IP: 192.168.50.137
	I0316 00:17:55.470600  123454 certs.go:194] generating shared ca certs ...
	I0316 00:17:55.470623  123454 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:55.470808  123454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:55.470868  123454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:55.470906  123454 certs.go:256] generating profile certs ...
	I0316 00:17:55.471028  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.key
	I0316 00:17:55.471140  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key.0f2ae39d
	I0316 00:17:55.471195  123454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key
	I0316 00:17:55.471410  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:55.471463  123454 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:55.471483  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:55.471515  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:55.471542  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:55.471568  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:55.471612  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:55.472267  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:55.517524  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:54.115678  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:56.613196  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.277553  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:59.277770  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.567992  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:55.601463  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:55.637956  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:17:55.670063  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:55.694990  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:55.718916  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:17:55.744124  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:55.770051  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:55.794846  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:55.819060  123454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:55.836991  123454 ssh_runner.go:195] Run: openssl version
	I0316 00:17:55.844665  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:55.857643  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862493  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862561  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.868430  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:55.880551  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:55.891953  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896627  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896687  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.902539  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:55.915215  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:55.926699  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931120  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931172  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.936791  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:55.948180  123454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:55.953021  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:55.959107  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:55.965018  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:55.971159  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:55.977069  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:55.983062  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
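The -checkend 86400 invocations above ask openssl whether each control-plane certificate will still be valid 24 hours from now (exit status 0 means it will not have expired by then). A minimal Go sketch of an equivalent local check, assuming one of the certificate paths from the log; certValidFor is an illustrative helper, not minikube's code:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // certValidFor reports whether the PEM certificate at path is still valid
    // after the given duration, mirroring `openssl x509 -checkend <seconds>`.
    func certValidFor(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := certValidFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("valid for another 24h:", ok)
    }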
	I0316 00:17:55.989119  123454 kubeadm.go:391] StartCluster: {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:55.989201  123454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:55.989254  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.029128  123454 cri.go:89] found id: ""
	I0316 00:17:56.029209  123454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:56.040502  123454 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:56.040525  123454 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:56.040531  123454 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:56.040577  123454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:56.051843  123454 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:56.052995  123454 kubeconfig.go:125] found "no-preload-238598" server: "https://192.168.50.137:8443"
	I0316 00:17:56.055273  123454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:56.066493  123454 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0316 00:17:56.066547  123454 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:56.066564  123454 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:56.066641  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.111015  123454 cri.go:89] found id: ""
	I0316 00:17:56.111110  123454 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:56.131392  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:56.142638  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:56.142665  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:56.142725  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:56.154318  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:56.154418  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:56.166011  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:56.176688  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:56.176752  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:56.187776  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.198216  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:56.198285  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.208661  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:56.218587  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:56.218655  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
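The grep/rm sequence above checks each static kubeconfig under /etc/kubernetes for the expected https://control-plane.minikube.internal:8443 server entry and deletes any file that does not reference it, so the kubeadm init phases that follow can regenerate them. A rough Go sketch of that pattern, assuming the paths and URL from the log and a local shell rather than minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const want = "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern (or the file) is missing;
            // in that case the stale config is removed so kubeadm can rewrite it.
            if err := exec.Command("sudo", "grep", want, f).Run(); err != nil {
                fmt.Printf("%q not found in %s - removing\n", want, f)
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    fmt.Fprintln(os.Stderr, "remove failed:", err)
                }
            }
        }
    }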
	I0316 00:17:56.230247  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:56.241302  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:56.361423  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.731067  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.369591288s)
	I0316 00:17:57.731101  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.952457  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.044540  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.179796  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:58.179894  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.680635  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.180617  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.205383  123454 api_server.go:72] duration metric: took 1.025590775s to wait for apiserver process to appear ...
	I0316 00:17:59.205411  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:59.205436  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:59.205935  123454 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0316 00:17:59.706543  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:58.613340  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:00.618869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:01.914835  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.914865  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:01.914879  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:01.972138  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.972173  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:02.206540  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.219111  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.219165  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:02.705639  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.709820  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.709850  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:03.206513  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:03.216320  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:18:03.224237  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:18:03.224263  123454 api_server.go:131] duration metric: took 4.018845389s to wait for apiserver health ...
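The loop above polls the apiserver's /healthz endpoint until it returns 200: the initial 403 comes from the anonymous user before the RBAC bootstrap roles exist, and the 500 responses list the poststarthook checks that have not completed yet. A minimal Go sketch of such a poll, assuming the URL from the log; skipping TLS verification and the 500 ms retry interval are illustrative choices, not minikube's exact behavior:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // The apiserver serves a self-signed certificate during bring-up, so
        // verification is skipped here purely for illustration.
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://192.168.50.137:8443/healthz"
        for i := 0; i < 60; i++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("not reachable yet:", err)
            } else {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz: ok")
                    return
                }
                // 403/500 bodies enumerate the checks that are still failing.
                fmt.Printf("healthz returned %d (%d bytes of detail)\n", resp.StatusCode, len(body))
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for a healthy apiserver")
    }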
	I0316 00:18:03.224272  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:18:03.224279  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:18:03.225951  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.777309  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.777625  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.227382  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:18:03.245892  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:18:03.267423  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:18:03.281349  123454 system_pods.go:59] 8 kube-system pods found
	I0316 00:18:03.281387  123454 system_pods.go:61] "coredns-76f75df574-d2f6z" [3cd22981-0f83-4a60-9930-c103cfc2d2ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:18:03.281397  123454 system_pods.go:61] "etcd-no-preload-238598" [d98fa5b6-ad24-4c90-98c8-9e5b8f1a3250] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:18:03.281408  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [e7d7a5a0-9a4f-4df2-aaf7-44c36e5bd313] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:18:03.281420  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [a198865e-0ed5-40b6-8b10-a4fccdefa059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:18:03.281434  123454 system_pods.go:61] "kube-proxy-cjhzn" [6529873c-cb9d-42d8-991d-e450783b1707] Running
	I0316 00:18:03.281443  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [bfb373fb-ec78-4ef1-b92e-3a8af3f805a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:18:03.281457  123454 system_pods.go:61] "metrics-server-57f55c9bc5-hffvp" [4181fe7f-3e95-455b-a744-8f4dca7b870d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:18:03.281466  123454 system_pods.go:61] "storage-provisioner" [d568ae10-7b9c-4c98-8263-a09505227ac7] Running
	I0316 00:18:03.281485  123454 system_pods.go:74] duration metric: took 14.043103ms to wait for pod list to return data ...
	I0316 00:18:03.281501  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:18:03.284899  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:18:03.284923  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:18:03.284934  123454 node_conditions.go:105] duration metric: took 3.425812ms to run NodePressure ...
	I0316 00:18:03.284955  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:18:03.562930  123454 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568376  123454 kubeadm.go:733] kubelet initialised
	I0316 00:18:03.568402  123454 kubeadm.go:734] duration metric: took 5.44437ms waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568412  123454 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:18:03.574420  123454 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:03.113622  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.613724  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:07.614087  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.278238  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.776236  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.582284  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.081679  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.082343  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.113282  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.114515  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.776835  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.777258  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.778115  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.582099  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:13.082243  123454 pod_ready.go:92] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:13.082263  123454 pod_ready.go:81] duration metric: took 9.507817974s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:13.082271  123454 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:15.088733  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.613599  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:16.614876  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.280289  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.777434  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:17.089800  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.092413  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.092441  123454 pod_ready.go:81] duration metric: took 6.010161958s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.092453  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.097972  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.097996  123454 pod_ready.go:81] duration metric: took 5.533097ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.098008  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102186  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.102204  123454 pod_ready.go:81] duration metric: took 4.187939ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102213  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106692  123454 pod_ready.go:92] pod "kube-proxy-cjhzn" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.106712  123454 pod_ready.go:81] duration metric: took 4.492665ms for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106720  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111735  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.111754  123454 pod_ready.go:81] duration metric: took 5.027601ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111764  123454 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
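pod_ready.go above treats a pod as ready once its PodReady condition reports True, which is why metrics-server-57f55c9bc5-hffvp keeps logging "Ready":"False" in the lines that follow. A short client-go sketch of that condition check, assuming a kubeconfig at /var/lib/minikube/kubeconfig and the scheduler pod name from the log; isPodReady is an illustrative helper, not minikube's code:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady returns true when the pod's PodReady condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-no-preload-238598", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
    }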
	I0316 00:18:19.113278  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.114061  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
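Each cycle above lists CRI containers per component with crictl ps -a --quiet --name=<component>; the repeated found id: "" / 0 containers means the control-plane containers have not been created yet, so the tool falls back to gathering kubelet, dmesg, describe-nodes, and CRI-O logs instead. A minimal Go sketch of that listing step, shelling out to crictl the way the log does; listContainerIDs is an illustrative helper:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs returns the IDs of all containers (running or not)
    // whose name matches the given component, e.g. "kube-apiserver".
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        // crictl prints one container ID per line; empty output means none exist.
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, comp := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
            ids, err := listContainerIDs(comp)
            if err != nil {
                fmt.Println(comp, "listing failed:", err)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", comp, len(ids), ids)
        }
    }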
	I0316 00:18:22.276633  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:24.278807  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.119790  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.618664  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.115414  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.613572  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:26.778891  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:29.277585  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.619282  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.118484  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.121236  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.114043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.119153  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.614043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:31.778203  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.276424  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.618082  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.619339  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.614209  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.113521  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:36.279218  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:38.779161  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.118552  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.619543  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.614042  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.113784  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:41.278664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:43.777450  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.119118  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.119473  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.614102  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:47.112496  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:46.277664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.279095  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:46.619201  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.619302  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:49.113616  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.613449  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:18:50.777409  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:52.779497  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.278072  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.119041  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:53.121052  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:54.113699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:56.613686  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:57.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.277696  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.618835  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.118984  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.119379  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.614207  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:01.113795  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:02.281155  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.779663  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:02.618637  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.619492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:03.613777  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.114458  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:07.276601  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.277239  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.619784  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.118699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:08.613361  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.615062  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:11.277319  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.777280  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:11.119614  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.618997  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.113490  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:15.613530  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:17.613578  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.276204  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.277156  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.118717  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.618005  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:19.614161  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.112808  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:20.777843  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.778609  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.780571  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:20.618505  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.619290  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.118778  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.113901  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:26.115541  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:27.277159  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:29.277242  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:27.618996  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:30.118650  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:28.614101  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.114366  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
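Each cycle above is the same diagnostic pass: look for a running kube-apiserver process, list every control-plane container by name, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. All crictl queries return empty, so describe-nodes has no apiserver to talk to. A sketch of running the same checks by hand inside the guest, with commands copied from the log (the kubectl binary path is specific to this v1.20.0 run):

    # Sketch only: the diagnostics minikube repeats in the cycles above.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'        # no API server process found
    sudo crictl ps -a --quiet --name=kube-apiserver     # empty: no container either
    sudo journalctl -u kubelet -n 400                   # kubelet side of why pods never start
    sudo journalctl -u crio -n 400                      # CRI-O side of the same story
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig         # fails: nothing on localhost:8443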
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:31.776661  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.778372  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:32.125130  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:34.619153  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.114785  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.116692  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:37.613605  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:36.276574  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.276784  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:36.619780  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.619966  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:39.614178  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.616246  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:40.779366  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.277656  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.279201  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.118560  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.120706  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:44.113022  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:46.114296  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:47.778494  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.277998  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.619070  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.622001  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.118739  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:48.114952  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.614794  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:52.776113  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.777687  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:52.119145  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.619675  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:53.113139  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:55.113961  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.613751  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.277412  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.277555  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.119685  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.618622  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.614914  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:02.113286  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
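Every describe-nodes attempt fails with "The connection to the server localhost:8443 was refused", which is consistent with the empty crictl listings: with no apiserver container, nothing is bound to the port. A sketch of confirming that directly in the guest (ss being available there is an assumption):

    # Sketch only: verify nothing is listening on the apiserver port named in the errors.
    sudo ss -lntp | grep 8443 || echo "no listener on 8443"
    sudo crictl ps -a    # empty output, matching the found id: "" lines above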
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:01.777542  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.278277  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:01.618756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.119973  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.113918  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.613434  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
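
The cycle repeated above runs every few seconds during this window: the start routine probes for each expected control-plane container (kube-apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, dashboard) with crictl, finds none, and then falls back to gathering kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The following is a minimal, hypothetical Go sketch of that probe pattern only (it assumes crictl and sudo are available on the node and is not minikube's actual cri.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// probe runs `sudo crictl ps -a --quiet --name=<name>` and returns how many
// container IDs came back; zero matches the "found id: \"\" / 0 containers"
// lines in the log above.
func probe(name string) (int, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return 0, err
	}
	return len(strings.Fields(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		n, err := probe(c)
		if err != nil {
			fmt.Printf("probe %s failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d containers\n", c, n)
	}
}
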
	I0316 00:20:06.278976  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.778022  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.124642  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.618968  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.613517  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.613699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.613997  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:11.277492  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.777429  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.619721  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.120185  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.114540  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:17.614281  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:15.781621  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.277078  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.277734  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.620224  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.118862  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.118920  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.117088  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.614917  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
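
Every "describe nodes" attempt in this stretch fails with "connection refused" on localhost:8443, i.e. nothing is listening on the apiserver's secure port at all, which is consistent with crictl finding no kube-apiserver container. A small, hypothetical reachability check like the sketch below (the address and timeout are assumptions for illustration, not values taken from this report) distinguishes that case from a slow or unhealthy apiserver:

package main

import (
	"fmt"
	"net"
	"time"
)

// checkAPIServerPort attempts a plain TCP connection to the apiserver's
// secure port. A "connection refused" error here means nothing is bound to
// the port, matching the repeated describe-nodes failures in the log above.
func checkAPIServerPort(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		return fmt.Errorf("apiserver port unreachable at %s: %w", addr, err)
	}
	conn.Close()
	return nil
}

func main() {
	if err := checkAPIServerPort("localhost:8443"); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("something is listening on localhost:8443")
}
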
	I0316 00:20:22.779251  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.276842  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.118990  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:24.119699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.114563  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.614869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.277136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.777082  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:26.619354  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:28.619489  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.619807  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:32.117311  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:32.277582  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.778394  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.622010  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:33.119518  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:35.119736  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.613788  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.277007  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.776793  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:37.121196  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.619239  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:38.616664  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:41.112900  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:41.777952  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.276802  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:42.119128  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.119255  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:43.114941  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:45.614095  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:47.616615  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.277300  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.777275  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.119389  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.618309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.116327  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.614990  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:50.777563  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.778761  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.276863  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.619469  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:53.119593  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.116184  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:57.613355  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:57.776955  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.276381  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.619683  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:58.122772  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:59.616518  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.115379  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.613248  123537 pod_ready.go:81] duration metric: took 4m0.006848891s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:02.613273  123537 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:02.613280  123537 pod_ready.go:38] duration metric: took 4m5.267062496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:02.613297  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:02.613347  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:02.613393  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:02.670107  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:02.670139  123537 cri.go:89] found id: ""
	I0316 00:21:02.670149  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:02.670210  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.675144  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:02.675212  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:02.720695  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:02.720720  123537 cri.go:89] found id: ""
	I0316 00:21:02.720729  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:02.720790  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.725490  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:02.725570  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.276825  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.779811  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.617765  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.619210  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.619603  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
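The pod_ready.go lines interleaved here come from two other concurrent processes, 123819 and 123454, each polling whether its metrics-server pod (metrics-server-57f55c9bc5-cm878 and metrics-server-57f55c9bc5-hffvp) has reached the Ready condition; in the window shown the pods stay not-ready. A minimal, illustrative client-go sketch of such a readiness check follows, assuming a kubeconfig at the default path; the helper podReady is ours, not minikube's.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod has condition Ready=True.
func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(cs, "kube-system", "metrics-server-57f55c9bc5-cm878")
	fmt.Println("ready:", ready, "err:", err)
}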
	I0316 00:21:02.778908  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:02.778959  123537 cri.go:89] found id: ""
	I0316 00:21:02.778971  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:02.779028  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.784772  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:02.784864  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:02.830682  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:02.830709  123537 cri.go:89] found id: ""
	I0316 00:21:02.830719  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:02.830784  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.835733  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:02.835813  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:02.875862  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:02.875890  123537 cri.go:89] found id: ""
	I0316 00:21:02.875902  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:02.875967  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.880801  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:02.880857  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:02.921585  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:02.921611  123537 cri.go:89] found id: ""
	I0316 00:21:02.921622  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:02.921689  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.929521  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:02.929593  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.977621  123537 cri.go:89] found id: ""
	I0316 00:21:02.977646  123537 logs.go:276] 0 containers: []
	W0316 00:21:02.977657  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.977668  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:02.977723  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:03.020159  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.020186  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.020193  123537 cri.go:89] found id: ""
	I0316 00:21:03.020204  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:03.020274  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.025593  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.030718  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:03.030744  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:03.090141  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:03.090182  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:03.147416  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:03.147466  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:03.189686  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:03.189733  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:03.245980  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:03.246020  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.296494  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:03.296534  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:03.349602  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:03.349635  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:03.364783  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:03.364819  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:03.513917  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:03.513955  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:03.567916  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:03.567952  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:03.607620  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:03.607658  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:03.658683  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:03.658717  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.699797  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:03.699827  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
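Unlike process 124077, process 123537 does find running control-plane containers, so each of its gathering passes dumps the last 400 lines of every discovered container with crictl. Below is a minimal, illustrative sketch of that per-container dump; the container IDs are copied from the log above and the /usr/bin/crictl path matches the commands shown.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Container IDs as discovered by "crictl ps -a --quiet --name=<component>";
	// these particular IDs appear in the log above and serve only as examples.
	containers := map[string]string{
		"kube-apiserver": "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2",
		"etcd":           "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613",
		"coredns":        "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b",
	}
	for name, id := range containers {
		fmt.Println("gathering logs for", name)
		out, err := exec.Command("/bin/bash", "-c",
			"sudo /usr/bin/crictl logs --tail 400 "+id).CombinedOutput()
		if err != nil {
			fmt.Println("  error:", err)
		}
		fmt.Printf("  %d bytes of output\n", len(out))
	}
}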
	I0316 00:21:06.715440  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:06.733725  123537 api_server.go:72] duration metric: took 4m16.598062692s to wait for apiserver process to appear ...
	I0316 00:21:06.733759  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:06.733810  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:06.733868  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:06.775396  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:06.775431  123537 cri.go:89] found id: ""
	I0316 00:21:06.775442  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:06.775506  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.780448  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:06.780503  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:06.836927  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:06.836962  123537 cri.go:89] found id: ""
	I0316 00:21:06.836972  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:06.837025  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.841803  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:06.841869  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:06.887445  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:06.887470  123537 cri.go:89] found id: ""
	I0316 00:21:06.887479  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:06.887534  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.892112  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:06.892192  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:06.936614  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:06.936642  123537 cri.go:89] found id: ""
	I0316 00:21:06.936653  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:06.936717  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.943731  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:06.943799  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:06.986738  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:06.986764  123537 cri.go:89] found id: ""
	I0316 00:21:06.986774  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:06.986843  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.991555  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:06.991621  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:07.052047  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:07.052074  123537 cri.go:89] found id: ""
	I0316 00:21:07.052082  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:07.052133  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.057297  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:07.057358  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:07.104002  123537 cri.go:89] found id: ""
	I0316 00:21:07.104034  123537 logs.go:276] 0 containers: []
	W0316 00:21:07.104042  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:07.104049  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:07.104113  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:07.148540  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:07.148562  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:07.148566  123537 cri.go:89] found id: ""
	I0316 00:21:07.148572  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:07.148620  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.153502  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.157741  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:07.157770  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:07.197856  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:07.197889  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:07.654282  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:07.654324  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:07.708539  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:07.708579  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:07.725072  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:07.725104  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.277657  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.780721  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.121773  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.619756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.862465  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:07.862498  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:07.925812  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:07.925846  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:07.986121  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:07.986152  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:08.036774  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:08.036817  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:08.091902  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:08.091933  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:08.142096  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:08.142128  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:08.210747  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:08.210789  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:08.270225  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:08.270259  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:10.817112  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:21:10.822359  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:21:10.823955  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:10.823978  123537 api_server.go:131] duration metric: took 4.090210216s to wait for apiserver health ...
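At this point process 123537 gets a 200 from the apiserver's /healthz endpoint at https://192.168.61.91:8443 and reads the control-plane version (v1.28.4). A minimal, illustrative Go sketch of such a probe follows; the real check uses the credentials from the cluster's kubeconfig, while the InsecureSkipVerify shortcut below is ours for brevity (unauthenticated requests to /healthz are typically permitted on default clusters).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver health endpoint the way the log line above records.
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Sketch-only shortcut: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.61.91:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok" once the control plane is up
}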
	I0316 00:21:10.823988  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:10.824019  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:10.824076  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:10.872487  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:10.872514  123537 cri.go:89] found id: ""
	I0316 00:21:10.872524  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:10.872590  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.877131  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:10.877197  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:10.916699  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:10.916728  123537 cri.go:89] found id: ""
	I0316 00:21:10.916737  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:10.916797  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.921114  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:10.921182  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:10.964099  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:10.964123  123537 cri.go:89] found id: ""
	I0316 00:21:10.964132  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:10.964191  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.968716  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:10.968788  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.008883  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.008909  123537 cri.go:89] found id: ""
	I0316 00:21:11.008919  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:11.008974  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.014068  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.014138  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.067209  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.067239  123537 cri.go:89] found id: ""
	I0316 00:21:11.067251  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:11.067315  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.072536  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.072663  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.119366  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.119399  123537 cri.go:89] found id: ""
	I0316 00:21:11.119411  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:11.119462  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.124502  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.124590  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.169458  123537 cri.go:89] found id: ""
	I0316 00:21:11.169494  123537 logs.go:276] 0 containers: []
	W0316 00:21:11.169505  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.169513  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:11.169576  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:11.218886  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:11.218923  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:11.218928  123537 cri.go:89] found id: ""
	I0316 00:21:11.218938  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:11.219002  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.223583  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.228729  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:11.228753  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:11.282781  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:11.282818  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:11.347330  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:11.347379  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.401191  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:11.401225  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.453126  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:11.453158  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.523058  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.523110  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.944108  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.944157  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:12.001558  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:12.001602  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:12.062833  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:12.062885  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:12.078726  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:12.078762  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:12.209248  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:12.209284  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:12.251891  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:12.251930  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:12.296240  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:12.296271  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:14.846244  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:14.846274  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.846279  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.846283  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.846287  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.846290  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.846294  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.846299  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.846302  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.846309  123537 system_pods.go:74] duration metric: took 4.022315588s to wait for pod list to return data ...
	I0316 00:21:14.846317  123537 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:14.848830  123537 default_sa.go:45] found service account: "default"
	I0316 00:21:14.848852  123537 default_sa.go:55] duration metric: took 2.529805ms for default service account to be created ...
	I0316 00:21:14.848859  123537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:14.861369  123537 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:14.861396  123537 system_pods.go:89] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.861401  123537 system_pods.go:89] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.861405  123537 system_pods.go:89] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.861409  123537 system_pods.go:89] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.861448  123537 system_pods.go:89] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.861456  123537 system_pods.go:89] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.861465  123537 system_pods.go:89] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.861470  123537 system_pods.go:89] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.861478  123537 system_pods.go:126] duration metric: took 12.614437ms to wait for k8s-apps to be running ...
	I0316 00:21:14.861488  123537 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:14.861534  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:14.879439  123537 system_svc.go:56] duration metric: took 17.934537ms WaitForService to wait for kubelet
	I0316 00:21:14.879484  123537 kubeadm.go:576] duration metric: took 4m24.743827748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:14.879523  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:14.882642  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:14.882673  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:14.882716  123537 node_conditions.go:105] duration metric: took 3.184841ms to run NodePressure ...
	I0316 00:21:14.882733  123537 start.go:240] waiting for startup goroutines ...
	I0316 00:21:14.882749  123537 start.go:245] waiting for cluster config update ...
	I0316 00:21:14.882789  123537 start.go:254] writing updated cluster config ...
	I0316 00:21:14.883119  123537 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:14.937804  123537 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:14.939886  123537 out.go:177] * Done! kubectl is now configured to use "embed-certs-666637" cluster and "default" namespace by default
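The tail above is what a successful start looks like for the embed-certs-666637 profile: kube-system pods enumerated, the default service account found, the kubelet service confirmed active, and node capacity read (2 CPUs, 17734596Ki ephemeral storage) before the profile is marked Done. The closing kubectl 1.29.3 vs cluster 1.28.4 note is informational only, since a one-minor-version skew is within kubectl's supported range. A minimal, illustrative client-go sketch of the node-capacity read behind the node_conditions lines; it is ours, not minikube's code.

package main

import (
	"context"
	"fmt"
	"os"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Print the same capacity fields the node_conditions lines report.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}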
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:12.278383  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.279769  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:12.124356  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.619164  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:16.777597  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.277188  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.119492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.119935  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.278136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:22.779721  123819 pod_ready.go:81] duration metric: took 4m0.010022344s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:22.779752  123819 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:22.779762  123819 pod_ready.go:38] duration metric: took 4m5.913207723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
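	(annotation) The pod_ready lines are a poll of the pod's Ready condition that gives up after a 4m0s budget, which is exactly what expired above for metrics-server. A rough stand-alone equivalent that shells out to kubectl with a jsonpath query rather than using client-go (pod and namespace names are copied from the log; this is a sketch, not minikube's pod_ready.go):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // podReady reports whether the pod's Ready condition is "True".
    func podReady(namespace, pod string) (bool, error) {
    	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", pod,
    		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
    	if err != nil {
    		return false, err
    	}
    	return strings.TrimSpace(string(out)) == "True", nil
    }

    func main() {
    	deadline := time.Now().Add(4 * time.Minute) // same budget as the 4m0s wait in the log
    	for time.Now().Before(deadline) {
    		ok, err := podReady("kube-system", "metrics-server-57f55c9bc5-cm878")
    		if err == nil && ok {
    			fmt.Println("pod is Ready")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for Ready")
    }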
	I0316 00:21:22.779779  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:22.779814  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:22.779876  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:22.836022  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:22.836058  123819 cri.go:89] found id: ""
	I0316 00:21:22.836069  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:22.836131  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.841289  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:22.841362  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:22.883980  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:22.884007  123819 cri.go:89] found id: ""
	I0316 00:21:22.884018  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:22.884084  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.889352  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:22.889427  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:22.929947  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:22.929977  123819 cri.go:89] found id: ""
	I0316 00:21:22.929987  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:22.930033  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.935400  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:22.935485  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:22.975548  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:22.975580  123819 cri.go:89] found id: ""
	I0316 00:21:22.975598  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:22.975671  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.981916  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:22.981998  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.019925  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.019965  123819 cri.go:89] found id: ""
	I0316 00:21:23.019977  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:23.020046  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.024870  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.024960  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.068210  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.068241  123819 cri.go:89] found id: ""
	I0316 00:21:23.068253  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:23.068344  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.073492  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.073578  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.113267  123819 cri.go:89] found id: ""
	I0316 00:21:23.113301  123819 logs.go:276] 0 containers: []
	W0316 00:21:23.113311  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.113319  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:23.113382  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:23.160155  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:23.160175  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.160179  123819 cri.go:89] found id: ""
	I0316 00:21:23.160192  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:23.160241  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.165125  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.169508  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:23.169530  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.218749  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:23.218786  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.274140  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:23.274177  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.320515  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:23.320559  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:23.835119  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:23.835173  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:23.907635  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.907691  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.925071  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:23.925126  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:23.991996  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:23.992028  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:24.032865  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.032899  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.090947  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:24.090987  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:24.285862  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:24.285896  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:24.337983  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:24.338027  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:24.379626  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:24.379657  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:21.618894  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:24.122648  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:26.918844  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.938014  123819 api_server.go:72] duration metric: took 4m17.276244s to wait for apiserver process to appear ...
	I0316 00:21:26.938053  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:26.938095  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:26.938157  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:26.983515  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:26.983538  123819 cri.go:89] found id: ""
	I0316 00:21:26.983546  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:26.983595  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:26.989278  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:26.989341  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:27.039968  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.040000  123819 cri.go:89] found id: ""
	I0316 00:21:27.040009  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:27.040078  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.045617  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:27.045687  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:27.085920  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.085948  123819 cri.go:89] found id: ""
	I0316 00:21:27.085960  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:27.086029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.090911  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:27.090989  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:27.137289  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:27.137322  123819 cri.go:89] found id: ""
	I0316 00:21:27.137333  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:27.137393  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.141956  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:27.142031  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:27.180823  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.180845  123819 cri.go:89] found id: ""
	I0316 00:21:27.180854  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:27.180919  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.185439  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:27.185523  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:27.225775  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:27.225797  123819 cri.go:89] found id: ""
	I0316 00:21:27.225805  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:27.225854  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.230648  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:27.230717  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:27.269429  123819 cri.go:89] found id: ""
	I0316 00:21:27.269465  123819 logs.go:276] 0 containers: []
	W0316 00:21:27.269477  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:27.269485  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:27.269550  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:27.308288  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.308316  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.308321  123819 cri.go:89] found id: ""
	I0316 00:21:27.308329  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:27.308378  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.312944  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.317794  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:27.317829  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:27.364287  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:27.364323  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.419482  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:27.419521  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.468553  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:27.468585  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.513287  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:27.513320  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.561382  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:27.561426  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.601292  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:27.601325  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:27.656848  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:27.656902  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:27.796212  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:27.796245  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:28.246569  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:28.246611  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:28.302971  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:28.303015  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:28.359613  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:28.359645  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:28.375844  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:28.375877  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:26.124217  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:28.619599  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
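	(annotation) The cleanup just above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes the file when the grep fails; here every file is already missing, so each grep exits with status 2 and the rm -f is a no-op. A small Go sketch of that check-then-remove logic, an approximation of the behaviour rather than a copy of minikube's kubeadm.go (it would need the same root privileges as the sudo commands in the log):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // cleanStaleConfig removes a kubeconfig that does not reference the expected
    // control-plane endpoint; a missing file is treated the same as a stale one,
    // matching the "grep ... || rm -f" behaviour in the log.
    func cleanStaleConfig(path, endpoint string) error {
    	data, err := os.ReadFile(path)
    	if err == nil && strings.Contains(string(data), endpoint) {
    		return nil // endpoint present, keep the file
    	}
    	if err := os.Remove(path); err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	fmt.Printf("ensured %s is absent\n", path)
    	return nil
    }

    func main() {
    	endpoint := "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
    		if err := cleanStaleConfig("/etc/kubernetes/"+f, endpoint); err != nil {
    			fmt.Println(err)
    		}
    	}
    }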
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:30.921320  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:21:30.926064  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:21:30.927332  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:30.927353  123819 api_server.go:131] duration metric: took 3.989292523s to wait for apiserver health ...
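	(annotation) The healthz wait above boils down to polling the apiserver's /healthz endpoint until it answers 200 or a deadline passes. A simplified self-contained sketch of that loop (the address is taken from the log; TLS verification is skipped here only to keep the example standalone, whereas the real client authenticates with the cluster's certificates):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it returns HTTP 200 or the timeout elapses.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout:   2 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("%s not healthy within %s", url, timeout)
    }

    func main() {
    	if err := waitHealthz("https://192.168.72.198:8444/healthz", 30*time.Second); err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("ok")
    }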
	I0316 00:21:30.927361  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:30.927386  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:30.927438  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:30.975348  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:30.975376  123819 cri.go:89] found id: ""
	I0316 00:21:30.975389  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:30.975459  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:30.980128  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:30.980194  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:31.029534  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.029563  123819 cri.go:89] found id: ""
	I0316 00:21:31.029574  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:31.029627  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.034066  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:31.034149  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:31.073857  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.073884  123819 cri.go:89] found id: ""
	I0316 00:21:31.073892  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:31.073961  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.078421  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:31.078501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:31.117922  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.117951  123819 cri.go:89] found id: ""
	I0316 00:21:31.117964  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:31.118029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.122435  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:31.122501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:31.161059  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.161089  123819 cri.go:89] found id: ""
	I0316 00:21:31.161101  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:31.161155  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.165503  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:31.165572  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:31.207637  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.207669  123819 cri.go:89] found id: ""
	I0316 00:21:31.207679  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:31.207742  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.212296  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:31.212360  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:31.251480  123819 cri.go:89] found id: ""
	I0316 00:21:31.251519  123819 logs.go:276] 0 containers: []
	W0316 00:21:31.251530  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:31.251539  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:31.251608  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:31.296321  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.296345  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.296350  123819 cri.go:89] found id: ""
	I0316 00:21:31.296357  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:31.296414  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.302159  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.306501  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:31.306526  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.348347  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:31.348379  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.388542  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:31.388573  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:31.439926  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:31.439962  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:31.499674  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:31.499711  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:31.552720  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:31.552771  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.605281  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:31.605331  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.651964  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:31.651997  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.696113  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:31.696150  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.749712  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:31.749751  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.801476  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:31.801508  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:32.236105  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:32.236146  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:32.253815  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:32.253848  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:34.930730  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:34.930759  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.930763  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.930767  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.930772  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.930775  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.930778  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.930783  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.930788  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.930798  123819 system_pods.go:74] duration metric: took 4.003426137s to wait for pod list to return data ...
	I0316 00:21:34.930807  123819 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:34.933462  123819 default_sa.go:45] found service account: "default"
	I0316 00:21:34.933492  123819 default_sa.go:55] duration metric: took 2.674728ms for default service account to be created ...
	I0316 00:21:34.933500  123819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:34.939351  123819 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:34.939382  123819 system_pods.go:89] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.939393  123819 system_pods.go:89] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.939400  123819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.939406  123819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.939414  123819 system_pods.go:89] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.939420  123819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.939442  123819 system_pods.go:89] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.939454  123819 system_pods.go:89] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.939469  123819 system_pods.go:126] duration metric: took 5.962328ms to wait for k8s-apps to be running ...
	I0316 00:21:34.939482  123819 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:34.939539  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:34.958068  123819 system_svc.go:56] duration metric: took 18.572929ms WaitForService to wait for kubelet
	I0316 00:21:34.958108  123819 kubeadm.go:576] duration metric: took 4m25.296341727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:34.958130  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:34.962603  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:34.962629  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:34.962641  123819 node_conditions.go:105] duration metric: took 4.505615ms to run NodePressure ...
	I0316 00:21:34.962657  123819 start.go:240] waiting for startup goroutines ...
	I0316 00:21:34.962667  123819 start.go:245] waiting for cluster config update ...
	I0316 00:21:34.962690  123819 start.go:254] writing updated cluster config ...
	I0316 00:21:34.963009  123819 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:35.015774  123819 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:35.019103  123819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-313436" cluster and "default" namespace by default
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:21:31.121456  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:33.122437  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:35.618906  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:37.619223  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:40.120743  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:42.619309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:44.619544  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:47.120179  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:49.619419  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:52.124510  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:54.125147  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:56.621651  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:59.120895  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:01.618287  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:03.620297  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:06.119870  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:08.122618  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:10.619464  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.121381  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:15.619590  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.122483  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:19.112568  123454 pod_ready.go:81] duration metric: took 4m0.000767313s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	E0316 00:22:19.112600  123454 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0316 00:22:19.112621  123454 pod_ready.go:38] duration metric: took 4m15.544198169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:22:19.112652  123454 kubeadm.go:591] duration metric: took 4m23.072115667s to restartPrimaryControlPlane
	W0316 00:22:19.112713  123454 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:22:19.112769  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:51.249327  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.136527598s)
	I0316 00:22:51.249406  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:22:51.268404  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:22:51.280832  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:22:51.292639  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:22:51.292661  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:22:51.292712  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:22:51.303272  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:22:51.303347  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:22:51.313854  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:22:51.324290  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:22:51.324361  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:22:51.334879  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.345302  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:22:51.345382  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.355682  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:22:51.366601  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:22:51.366660  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:22:51.377336  123454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:22:51.594624  123454 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:00.473055  123454 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0316 00:23:00.473140  123454 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:00.473255  123454 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:00.473415  123454 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:00.473551  123454 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:00.473682  123454 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:00.475591  123454 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:00.475704  123454 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:00.475803  123454 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:00.475905  123454 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:00.476001  123454 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:00.476100  123454 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:00.476190  123454 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:00.476281  123454 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:00.476378  123454 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:00.476516  123454 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:00.476647  123454 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:00.476715  123454 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:00.476801  123454 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:00.476879  123454 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:00.476968  123454 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0316 00:23:00.477042  123454 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:00.477166  123454 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:00.477253  123454 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:00.477378  123454 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:00.477480  123454 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:00.479084  123454 out.go:204]   - Booting up control plane ...
	I0316 00:23:00.479206  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:00.479332  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:00.479440  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:00.479541  123454 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:00.479625  123454 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:00.479697  123454 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:00.479874  123454 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:23:00.479994  123454 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003092 seconds
	I0316 00:23:00.480139  123454 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:23:00.480339  123454 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:23:00.480445  123454 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:23:00.480687  123454 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-238598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:23:00.480789  123454 kubeadm.go:309] [bootstrap-token] Using token: aspuu8.i4yhgkjx7e43mgmn
	I0316 00:23:00.482437  123454 out.go:204]   - Configuring RBAC rules ...
	I0316 00:23:00.482568  123454 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:23:00.482697  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:23:00.482917  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:23:00.483119  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:23:00.483283  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:23:00.483406  123454 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:23:00.483582  123454 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:23:00.483653  123454 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:23:00.483714  123454 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:23:00.483720  123454 kubeadm.go:309] 
	I0316 00:23:00.483815  123454 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:23:00.483833  123454 kubeadm.go:309] 
	I0316 00:23:00.483973  123454 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:23:00.483986  123454 kubeadm.go:309] 
	I0316 00:23:00.484014  123454 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:23:00.484119  123454 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:23:00.484200  123454 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:23:00.484211  123454 kubeadm.go:309] 
	I0316 00:23:00.484283  123454 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:23:00.484288  123454 kubeadm.go:309] 
	I0316 00:23:00.484360  123454 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:23:00.484366  123454 kubeadm.go:309] 
	I0316 00:23:00.484452  123454 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:23:00.484560  123454 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:23:00.484657  123454 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:23:00.484666  123454 kubeadm.go:309] 
	I0316 00:23:00.484798  123454 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:23:00.484920  123454 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:23:00.484932  123454 kubeadm.go:309] 
	I0316 00:23:00.485053  123454 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485196  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:23:00.485227  123454 kubeadm.go:309] 	--control-plane 
	I0316 00:23:00.485241  123454 kubeadm.go:309] 
	I0316 00:23:00.485357  123454 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:23:00.485367  123454 kubeadm.go:309] 
	I0316 00:23:00.485488  123454 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485646  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0316 00:23:00.485661  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:23:00.485671  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:23:00.487417  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:23:00.489063  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:23:00.526147  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:23:00.571796  123454 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-238598 minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=no-preload-238598 minikube.k8s.io/primary=true
	I0316 00:23:00.892908  123454 ops.go:34] apiserver oom_adj: -16
	I0316 00:23:00.892994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.394077  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.893097  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.393114  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.893994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.393930  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.893428  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.393822  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.893810  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.393999  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.893998  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.393104  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.893725  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.393873  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.893432  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.394054  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.893595  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.393109  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.893621  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.393322  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.894024  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.393711  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.893465  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.393059  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.497890  123454 kubeadm.go:1107] duration metric: took 11.926069028s to wait for elevateKubeSystemPrivileges
	W0316 00:23:12.497951  123454 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:23:12.497962  123454 kubeadm.go:393] duration metric: took 5m16.508852945s to StartCluster
	I0316 00:23:12.497988  123454 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.498139  123454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:23:12.500632  123454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.500995  123454 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:23:12.502850  123454 out.go:177] * Verifying Kubernetes components...
	I0316 00:23:12.501089  123454 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:23:12.501233  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:23:12.504432  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:23:12.504443  123454 addons.go:69] Setting storage-provisioner=true in profile "no-preload-238598"
	I0316 00:23:12.504491  123454 addons.go:234] Setting addon storage-provisioner=true in "no-preload-238598"
	I0316 00:23:12.504502  123454 addons.go:69] Setting default-storageclass=true in profile "no-preload-238598"
	I0316 00:23:12.504515  123454 addons.go:69] Setting metrics-server=true in profile "no-preload-238598"
	I0316 00:23:12.504526  123454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-238598"
	I0316 00:23:12.504541  123454 addons.go:234] Setting addon metrics-server=true in "no-preload-238598"
	W0316 00:23:12.504551  123454 addons.go:243] addon metrics-server should already be in state true
	I0316 00:23:12.504582  123454 host.go:66] Checking if "no-preload-238598" exists ...
	W0316 00:23:12.504505  123454 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:23:12.504656  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.504996  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505012  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.505013  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505229  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.521634  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0316 00:23:12.521698  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0316 00:23:12.522283  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522377  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522836  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.522861  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.522990  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.523032  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.523203  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523375  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523737  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.523758  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524232  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.524277  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524695  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0316 00:23:12.525112  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.525610  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.525637  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.526025  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.526218  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.530010  123454 addons.go:234] Setting addon default-storageclass=true in "no-preload-238598"
	W0316 00:23:12.530029  123454 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:23:12.530053  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.530277  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.530315  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.540310  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0316 00:23:12.545850  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.545966  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0316 00:23:12.546335  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.546740  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.546761  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.547035  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.547232  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.548605  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.548626  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.549001  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.549058  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0316 00:23:12.549268  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.549323  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.549454  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.551419  123454 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:23:12.549975  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.551115  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.553027  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:23:12.553050  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:23:12.553074  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.553082  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.554948  123454 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:23:12.553404  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.556096  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556544  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.556568  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556640  123454 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.556660  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:23:12.556679  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.556769  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.557150  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.557176  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.557398  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.557600  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.557886  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.560220  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560555  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.560582  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560759  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.560982  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.561157  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.561318  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.574877  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0316 00:23:12.575802  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.576313  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.576337  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.576640  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.577015  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.578483  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.578814  123454 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.578835  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:23:12.578856  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.581832  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582439  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.582454  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.582465  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582635  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.582819  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.582969  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.729051  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:23:12.747162  123454 node_ready.go:35] waiting up to 6m0s for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.759957  123454 node_ready.go:49] node "no-preload-238598" has status "Ready":"True"
	I0316 00:23:12.759992  123454 node_ready.go:38] duration metric: took 12.79378ms for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.760006  123454 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.772201  123454 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795626  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.795660  123454 pod_ready.go:81] duration metric: took 23.429082ms for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795674  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808661  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.808688  123454 pod_ready.go:81] duration metric: took 13.006568ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808699  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821578  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.821613  123454 pod_ready.go:81] duration metric: took 12.904651ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821627  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.832585  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:23:12.832616  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:23:12.838375  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.838404  123454 pod_ready.go:81] duration metric: took 16.768452ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.838415  123454 pod_ready.go:38] duration metric: took 78.396172ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.838435  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:23:12.838522  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:23:12.889063  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.907225  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.924533  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:23:12.924565  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:23:12.947224  123454 api_server.go:72] duration metric: took 446.183679ms to wait for apiserver process to appear ...
	I0316 00:23:12.947257  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:23:12.947281  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:23:12.975463  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:12.975495  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:23:13.023702  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:23:13.039598  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:23:13.039638  123454 api_server.go:131] duration metric: took 92.372403ms to wait for apiserver health ...
	I0316 00:23:13.039649  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:23:13.069937  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:13.141358  123454 system_pods.go:59] 5 kube-system pods found
	I0316 00:23:13.141387  123454 system_pods.go:61] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.141391  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.141397  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.141400  123454 system_pods.go:61] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending
	I0316 00:23:13.141404  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.141411  123454 system_pods.go:74] duration metric: took 101.754765ms to wait for pod list to return data ...
	I0316 00:23:13.141419  123454 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:23:13.200153  123454 default_sa.go:45] found service account: "default"
	I0316 00:23:13.200193  123454 default_sa.go:55] duration metric: took 58.765381ms for default service account to be created ...
	I0316 00:23:13.200205  123454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:23:13.381398  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381431  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.381771  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.381825  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.381840  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.381849  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381862  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.382154  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.382159  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.382189  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.383303  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.383345  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.383353  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending
	I0316 00:23:13.383360  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.383368  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.383374  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.383384  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.383396  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.383440  123454 retry.go:31] will retry after 221.286986ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.408809  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.408839  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.409146  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.409191  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.409195  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.612171  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.612205  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612212  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612221  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.612226  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.612230  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.612236  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.612239  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.612260  123454 retry.go:31] will retry after 311.442515ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.934136  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.934170  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934177  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934185  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.934191  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.934197  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.934204  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.934210  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.934234  123454 retry.go:31] will retry after 453.147474ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.343055  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.435784176s)
	I0316 00:23:14.343123  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343139  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343497  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343523  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.343540  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343554  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343800  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.343876  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343895  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.404681  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.404725  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404738  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404748  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.404758  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.404767  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.404777  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.404790  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.404810  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.404821  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending
	I0316 00:23:14.404846  123454 retry.go:31] will retry after 464.575803ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.447649  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.377663696s)
	I0316 00:23:14.447706  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.447724  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448062  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448083  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448092  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.448100  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448367  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.448367  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448394  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448407  123454 addons.go:470] Verifying addon metrics-server=true in "no-preload-238598"
	I0316 00:23:14.450675  123454 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0316 00:23:14.452378  123454 addons.go:505] duration metric: took 1.951301533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0316 00:23:14.888167  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.888206  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:14.888219  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.888226  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.888236  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.888243  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.888252  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.888260  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.888292  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.888301  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:14.888325  123454 retry.go:31] will retry after 490.515879ms: missing components: kube-proxy
	I0316 00:23:15.389667  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:15.389694  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:15.389700  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Running
	I0316 00:23:15.389704  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:15.389708  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:15.389712  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:15.389716  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Running
	I0316 00:23:15.389721  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:15.389728  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:15.389735  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:15.389745  123454 system_pods.go:126] duration metric: took 2.189532563s to wait for k8s-apps to be running ...
	I0316 00:23:15.389757  123454 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:23:15.389805  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:15.409241  123454 system_svc.go:56] duration metric: took 19.469575ms WaitForService to wait for kubelet
	I0316 00:23:15.409273  123454 kubeadm.go:576] duration metric: took 2.908240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:23:15.409292  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:23:15.412530  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:23:15.412559  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:23:15.412570  123454 node_conditions.go:105] duration metric: took 3.272979ms to run NodePressure ...
	I0316 00:23:15.412585  123454 start.go:240] waiting for startup goroutines ...
	I0316 00:23:15.412594  123454 start.go:245] waiting for cluster config update ...
	I0316 00:23:15.412608  123454 start.go:254] writing updated cluster config ...
	I0316 00:23:15.412923  123454 ssh_runner.go:195] Run: rm -f paused
	I0316 00:23:15.468245  123454 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 00:23:15.470311  123454 out.go:177] * Done! kubectl is now configured to use "no-preload-238598" cluster and "default" namespace by default
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
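	The failure text above is kubeadm's wait-control-plane phase giving up: the kubelet never answers its local healthz probe, so kubeadm points at the kubelet service and the runtime's containers. A minimal sketch of that triage on the node, using the same commands the log quotes (the socket path matches the cri-o runtime this job runs):

	    systemctl status kubelet
	    journalctl -xeu kubelet | tail -n 100
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	    sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID   # CONTAINERID from the ps -a output above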
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
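	Between the two init attempts, minikube runs `kubeadm reset`, then checks whether each kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8443 and removes any file that does not (or, as here, does not exist). A rough shell equivalent of that stale-config sweep, as a sketch of the behaviour shown in the log rather than minikube's actual Go implementation:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done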
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
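	At this point kubeadm has written the static pod manifests and waits up to 4m0s for the kubelet to start them. A quick way to see what was actually laid down on the node while that wait runs (all paths are the ones named in the log above):

	    sudo ls -la /etc/kubernetes/manifests         # kube-apiserver, kube-controller-manager, kube-scheduler, etcd manifests
	    sudo ls /var/lib/minikube/certs               # the pre-existing certificates kubeadm reused
	    sudo cat /var/lib/kubelet/kubeadm-flags.env   # flags handed to the kubelet by kubeadm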
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
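	The repeated [kubelet-check] messages above all come from the same probe: kubeadm curls the kubelet's local healthz endpoint on port 10248 and gets "connection refused", meaning the kubelet process is not up at all rather than merely unhealthy. The probe can be reproduced directly on the node:

	    curl -sS http://localhost:10248/healthz; echo
	    # "ok" when the kubelet is serving; "connection refused" when it never started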
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
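	Because the control plane never started, every crictl query above returns an empty list and `kubectl describe nodes` fails against localhost:8443, so the only useful diagnostics left are the kubelet, dmesg and CRI-O journals. A condensed version of the same evidence-gathering pass, assuming shell access to the node:

	    sudo crictl ps -a --quiet --name=kube-apiserver     # empty when the control plane never came up
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u crio -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig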
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
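	When a start fails like this, the box above asks for a full log bundle. A minimal sketch of collecting it for the failing profile (the profile name is a placeholder; substitute the one shown by `minikube profile list`):

	    minikube logs --file=logs.txt -p <profile>
	    # then attach logs.txt to a new issue at https://github.com/kubernetes/minikube/issues/new/choose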
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 
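	The suggestion above is to retry the start with the kubelet forced onto the systemd cgroup driver. A hedged example of what that retry could look like for this job's configuration (KVM driver and CRI-O runtime are what this test matrix uses; the profile name is a placeholder, and the extra-config flag is the one quoted in the suggestion):

	    minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.20.0 \
	      --extra-config=kubelet.cgroup-driver=systemd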
	
	
	==> CRI-O <==
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.705799892Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=72eaf93b-c366-43eb-84be-851b702d5396 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.707082738Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=799a0224-22c2-46a7-b00c-d5d8a5958a45 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.707674226Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549137707651861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=799a0224-22c2-46a7-b00c-d5d8a5958a45 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.708265739Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a98c29d-9f05-4a45-a7c2-a9e82b882328 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.708316037Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a98c29d-9f05-4a45-a7c2-a9e82b882328 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.708626614Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a98c29d-9f05-4a45-a7c2-a9e82b882328 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.749984016Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=465ccd7b-cc9e-4884-be60-beb5a720d1a7 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.750082760Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=465ccd7b-cc9e-4884-be60-beb5a720d1a7 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.751758473Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=108c2c91-ae8e-476e-bbde-5660e425cae3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.752241188Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549137752215605,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=108c2c91-ae8e-476e-bbde-5660e425cae3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.753239064Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=853a2d57-9888-4137-895e-7143040bf159 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.753309521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=853a2d57-9888-4137-895e-7143040bf159 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.753498988Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=853a2d57-9888-4137-895e-7143040bf159 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.796422542Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=622d9577-73bd-43bb-9ecd-a6108a0949c7 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.796512433Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=622d9577-73bd-43bb-9ecd-a6108a0949c7 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.797681712Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d35fbd7-4829-43dd-86bf-11ea7aecaa20 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.798793179Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549137798767297,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d35fbd7-4829-43dd-86bf-11ea7aecaa20 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.799577742Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=525c5e25-972f-4fd0-a493-2c289b4fd073 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.799651551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=525c5e25-972f-4fd0-a493-2c289b4fd073 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.799872039Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=525c5e25-972f-4fd0-a493-2c289b4fd073 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.814549866Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=73c41609-13fa-40ff-956b-d62164ffbfdf name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.815539631Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:60914654-d240-4165-b045-5b411d99e2e0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548594678579062,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-16T00:23:14.365343803Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53e9123ea07f2ffffade62cee18a78a978bbcc89eb74898d2ba61ddaa864d44a,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-j5k5h,Uid:cbdf6082-83fb-4af6-95e9-90545e64c898,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548594471741848,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-j5k5h,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbdf6082-83fb-4af6-95e9-90545e64c898
,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:23:14.163693570Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-wg5c8,Uid:a7347306-ab8d-42d0-935c-98f98192e6b7,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548593625470210,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:23:13.314398913Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&PodSandboxMetadata{Name:coredns-76f75df574-5drh8,Uid:e86d8e6a-f832-4364-
ac68-c69e40b92523,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548593563286218,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e86d8e6a-f832-4364-ac68-c69e40b92523,k8s-app: kube-dns,pod-template-hash: 76f75df574,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:23:13.255439668Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&PodSandboxMetadata{Name:kube-proxy-h6p8x,Uid:738ca90e-7f8a-4449-8e5b-df714ee8320a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548593203713061,Labels:map[string]string{controller-revision-hash: 79c5f556d9,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-16T00:23:12.883772963Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-238598,Uid:5a52dc9965dac3768fd9feb58806b292,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548573831903052,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5a52dc9965dac3768fd9feb58806b292,kubernetes.io/config.seen: 2024-03-16T00:22:53.362986073Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&PodSandboxMetadata{Name:kube-controller-m
anager-no-preload-238598,Uid:2f841aae5b305433b44fb61546ba3c06,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548573829280369,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 2f841aae5b305433b44fb61546ba3c06,kubernetes.io/config.seen: 2024-03-16T00:22:53.362985333Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-238598,Uid:2ad073ffe7a3e400ba4e3a87cafbed54,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548573825746437,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-pr
eload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.137:8443,kubernetes.io/config.hash: 2ad073ffe7a3e400ba4e3a87cafbed54,kubernetes.io/config.seen: 2024-03-16T00:22:53.362984225Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-238598,Uid:fe40ef889eafc3500f2f54a30348e295,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710548573824876483,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.137:237
9,kubernetes.io/config.hash: fe40ef889eafc3500f2f54a30348e295,kubernetes.io/config.seen: 2024-03-16T00:22:53.362980708Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=73c41609-13fa-40ff-956b-d62164ffbfdf name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.816407175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=99813e78-f07e-4d5a-b237-bf0f5f05ff8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.816495666Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=99813e78-f07e-4d5a-b237-bf0f5f05ff8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:32:17 no-preload-238598 crio[695]: time="2024-03-16 00:32:17.816698208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=99813e78-f07e-4d5a-b237-bf0f5f05ff8e name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	785ab7a84aef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   301a783629bcc       storage-provisioner
	384d72cd0e231       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   a21b9eed0e132       coredns-76f75df574-wg5c8
	f77f69c426101       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   9bdf745b2ba2e       coredns-76f75df574-5drh8
	5d72a7cc21406       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   49b058addd673       kube-proxy-h6p8x
	88a3af391e8b6       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   0792bffad2469       etcd-no-preload-238598
	b603efc4e9e65       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   98f30e5ddf8e2       kube-controller-manager-no-preload-238598
	4ff55775eeb84       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   16957d5bf895d       kube-scheduler-no-preload-238598
	11395c3995c48       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   f06443b6f5e5c       kube-apiserver-no-preload-238598
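The container status table above is CRI-O's answer to the kubelet's CRI polls shown in the debug log. As a rough sketch (assuming the no-preload-238598 profile from this run is still up), the same listing can be pulled by hand over the crio socket:

    out/minikube-linux-amd64 -p no-preload-238598 ssh "sudo crictl ps -a"

crictl issues the same /runtime.v1.RuntimeService/ListContainers call that crio logs above.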
	
	
	==> coredns [384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-238598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-238598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=no-preload-238598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:22:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-238598
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:32:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:28:26 +0000   Sat, 16 Mar 2024 00:22:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:28:26 +0000   Sat, 16 Mar 2024 00:22:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:28:26 +0000   Sat, 16 Mar 2024 00:22:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:28:26 +0000   Sat, 16 Mar 2024 00:23:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.137
	  Hostname:    no-preload-238598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8403e337d3114d4f95c9de93d0441895
	  System UUID:                8403e337-d311-4d4f-95c9-de93d0441895
	  Boot ID:                    80bd4afb-43a5-4e2c-b6c7-cd172769a008
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-5drh8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 coredns-76f75df574-wg5c8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m5s
	  kube-system                 etcd-no-preload-238598                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-no-preload-238598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-no-preload-238598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-h6p8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	  kube-system                 kube-scheduler-no-preload-238598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-j5k5h              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m3s   kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node no-preload-238598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node no-preload-238598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node no-preload-238598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             9m18s  kubelet          Node no-preload-238598 status is now: NodeNotReady
	  Normal  NodeReady                9m8s   kubelet          Node no-preload-238598 status is now: NodeReady
	  Normal  RegisteredNode           9m7s   node-controller  Node no-preload-238598 event: Registered Node no-preload-238598 in Controller
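The node description above is plain kubectl describe output captured by the harness. Assuming the kubeconfig context follows minikube's default of matching the profile name, a sketch for regenerating it is:

    kubectl --context no-preload-238598 describe node no-preload-238598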
	
	
	==> dmesg <==
	[  +0.056167] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044615] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.003320] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.845161] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.768343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.241338] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.057078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065533] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.200030] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.109040] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.276038] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[ +17.291274] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.063342] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.424231] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[Mar16 00:18] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.419587] kauditd_printk_skb: 69 callbacks suppressed
	[Mar16 00:22] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.580029] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +4.608804] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.700655] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	[Mar16 00:23] systemd-fstab-generator[4344]: Ignoring "noauto" option for root device
	[  +0.088538] kauditd_printk_skb: 14 callbacks suppressed
	[Mar16 00:24] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4] <==
	{"level":"info","ts":"2024-03-16T00:22:54.499485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 switched to configuration voters=(6048867247869148306)"}
	{"level":"info","ts":"2024-03-16T00:22:54.501553Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7ac1a4431768b343","local-member-id":"53f1e4b6b2bc3c92","added-peer-id":"53f1e4b6b2bc3c92","added-peer-peer-urls":["https://192.168.50.137:2380"]}
	{"level":"info","ts":"2024-03-16T00:22:54.500089Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-16T00:22:54.501813Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"53f1e4b6b2bc3c92","initial-advertise-peer-urls":["https://192.168.50.137:2380"],"listen-peer-urls":["https://192.168.50.137:2380"],"advertise-client-urls":["https://192.168.50.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-16T00:22:54.50457Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-16T00:22:54.500194Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-16T00:22:54.504689Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-16T00:22:55.303503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-16T00:22:55.30357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-16T00:22:55.3036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgPreVoteResp from 53f1e4b6b2bc3c92 at term 1"}
	{"level":"info","ts":"2024-03-16T00:22:55.303613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became candidate at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.303618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgVoteResp from 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.303626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became leader at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.303634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 53f1e4b6b2bc3c92 elected leader 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.308355Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"53f1e4b6b2bc3c92","local-member-attributes":"{Name:no-preload-238598 ClientURLs:[https://192.168.50.137:2379]}","request-path":"/0/members/53f1e4b6b2bc3c92/attributes","cluster-id":"7ac1a4431768b343","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:22:55.308417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:22:55.308484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:22:55.319804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.137:2379"}
	{"level":"info","ts":"2024-03-16T00:22:55.320178Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.321513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:22:55.321564Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:22:55.324208Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7ac1a4431768b343","local-member-id":"53f1e4b6b2bc3c92","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.324308Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.32436Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.325775Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 00:32:18 up 15 min,  0 users,  load average: 0.11, 0.17, 0.15
	Linux no-preload-238598 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6] <==
	I0316 00:26:14.986243       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:27:56.932493       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:56.932684       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0316 00:27:57.933052       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:57.933166       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:27:57.933177       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:27:57.933281       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:27:57.933381       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:27:57.934648       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:28:57.933447       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:28:57.933671       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:28:57.933700       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:28:57.934950       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:28:57.935055       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:28:57.935134       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:30:57.934250       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:30:57.934560       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:30:57.934589       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:30:57.935822       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:30:57.935954       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:30:57.935980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
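These kube-apiserver entries show the aggregation layer repeatedly failing to fetch the OpenAPI spec for v1beta1.metrics.k8s.io, with the metrics-server backend answering 503, which matches the metrics-server related failures in this run. A sketch for checking the APIService's availability condition, assuming the same kubeconfig context as above, is:

    kubectl --context no-preload-238598 get apiservice v1beta1.metrics.k8s.io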
	
	
	==> kube-controller-manager [b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df] <==
	I0316 00:26:42.602874       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:27:12.139600       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:27:12.613298       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:27:42.146586       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:27:42.623861       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:28:12.152475       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:28:12.632645       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:28:42.158316       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:28:42.641647       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:29:10.694182       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="299.481µs"
	E0316 00:29:12.164321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:29:12.651014       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:29:23.693573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="187.686µs"
	E0316 00:29:42.171178       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:29:42.660040       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:30:12.177291       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:30:12.670311       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:30:42.183422       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:30:42.683174       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:31:12.189075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:31:12.691461       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:31:42.196025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:31:42.700682       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:32:12.202641       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:32:12.710881       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c] <==
	I0316 00:23:13.893839       1 server_others.go:72] "Using iptables proxy"
	I0316 00:23:14.022855       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.137"]
	I0316 00:23:14.462251       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0316 00:23:14.462309       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:23:14.462328       1 server_others.go:168] "Using iptables Proxier"
	I0316 00:23:14.478738       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:23:14.478993       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0316 00:23:14.479030       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:23:14.488714       1 config.go:188] "Starting service config controller"
	I0316 00:23:14.488770       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:23:14.488789       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:23:14.488793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:23:14.501434       1 config.go:315] "Starting node config controller"
	I0316 00:23:14.501481       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:23:14.591215       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:23:14.591274       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:23:14.601664       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc] <==
	W0316 00:22:56.969783       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0316 00:22:56.969790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0316 00:22:56.971466       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0316 00:22:56.971505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0316 00:22:57.774650       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0316 00:22:57.774703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0316 00:22:57.777042       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0316 00:22:57.777089       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:22:57.783440       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:57.783487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:57.812324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0316 00:22:57.812352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0316 00:22:57.906439       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:57.906489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:57.932445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0316 00:22:57.932553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0316 00:22:58.059189       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:58.059313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:58.074720       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0316 00:22:58.074769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0316 00:22:58.154664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:58.154721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:58.237807       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0316 00:22:58.237873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0316 00:23:00.949011       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:30:00 no-preload-238598 kubelet[4166]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:30:00 no-preload-238598 kubelet[4166]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:30:00 no-preload-238598 kubelet[4166]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:30:00 no-preload-238598 kubelet[4166]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:30:02 no-preload-238598 kubelet[4166]: E0316 00:30:02.675285    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:30:17 no-preload-238598 kubelet[4166]: E0316 00:30:17.675466    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:30:28 no-preload-238598 kubelet[4166]: E0316 00:30:28.677243    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:30:41 no-preload-238598 kubelet[4166]: E0316 00:30:41.675172    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:30:52 no-preload-238598 kubelet[4166]: E0316 00:30:52.675571    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:31:00 no-preload-238598 kubelet[4166]: E0316 00:31:00.712443    4166 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:31:00 no-preload-238598 kubelet[4166]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:31:00 no-preload-238598 kubelet[4166]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:31:00 no-preload-238598 kubelet[4166]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:31:00 no-preload-238598 kubelet[4166]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:31:06 no-preload-238598 kubelet[4166]: E0316 00:31:06.676264    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:31:18 no-preload-238598 kubelet[4166]: E0316 00:31:18.675076    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:31:29 no-preload-238598 kubelet[4166]: E0316 00:31:29.674203    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:31:44 no-preload-238598 kubelet[4166]: E0316 00:31:44.674751    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:31:59 no-preload-238598 kubelet[4166]: E0316 00:31:59.674914    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:32:00 no-preload-238598 kubelet[4166]: E0316 00:32:00.712034    4166 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:32:00 no-preload-238598 kubelet[4166]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:32:00 no-preload-238598 kubelet[4166]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:32:00 no-preload-238598 kubelet[4166]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:32:00 no-preload-238598 kubelet[4166]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:32:13 no-preload-238598 kubelet[4166]: E0316 00:32:13.674212    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	
	
	==> storage-provisioner [785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a] <==
	I0316 00:23:14.976879       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:23:14.986757       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:23:14.987058       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:23:14.995242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:23:14.995704       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-238598_9d73a6bf-16e1-40ec-b905-c21f9b3c4d26!
	I0316 00:23:15.001432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef3c01a2-febc-4977-aec7-0a7a64617505", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-238598_9d73a6bf-16e1-40ec-b905-c21f9b3c4d26 became leader
	I0316 00:23:15.096744       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-238598_9d73a6bf-16e1-40ec-b905-c21f9b3c4d26!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-238598 -n no-preload-238598
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-238598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-j5k5h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-238598 describe pod metrics-server-57f55c9bc5-j5k5h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-238598 describe pod metrics-server-57f55c9bc5-j5k5h: exit status 1 (68.307557ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-j5k5h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-238598 describe pod metrics-server-57f55c9bc5-j5k5h: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.39s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
[... the same "connect: connection refused" warning for https://192.168.39.107:8443 repeated throughout the 9m0s wait ...]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
E0316 00:28:58.906448   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
	[this WARNING was logged 9 times in total; every poll failed with the same connection refused error against 192.168.39.107:8443]
E0316 00:29:08.402528   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
	[this WARNING was logged 58 times in total; every poll failed with the same connection refused error against 192.168.39.107:8443]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
E0316 00:32:01.953493   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
E0316 00:33:58.906116   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
E0316 00:34:08.402464   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (240.107658ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-402923" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
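For reference, the repeated pod-list warnings above correspond to a label-selector query against the profile's apiserver. A minimal sketch of that kind of query using client-go follows; this is illustrative only, not the test suite's actual helper, and the kubeconfig handling shown is an assumption:

// Sketch only: lists dashboard pods by label selector, mirroring the request in the
// warnings above. Assumes client-go; kubeconfig path handling is illustrative.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (e.g. ~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// GET /api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app=kubernetes-dashboard
	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// With the apiserver stopped, this fails with the same "connection refused" seen above.
		fmt.Println("pod list failed:", err)
		return
	}
	fmt.Println("dashboard pods:", len(pods.Items))
}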
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (241.499632ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
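The --format flags used in the status checks above take Go text/template expressions evaluated against minikube's status output. A minimal sketch of how such a template renders a single field; the struct below is a simplified stand-in, not minikube's actual status type:

// Sketch only: shows how a --format={{.APIServer}} style Go template picks one field.
package main

import (
	"os"
	"text/template"
)

type status struct {
	Host      string
	APIServer string
}

func main() {
	s := status{Host: "Running", APIServer: "Stopped"}
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	// Prints "Stopped", matching the stdout block above.
	_ = tmpl.Execute(os.Stdout, s)
}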
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-402923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-402923 logs -n 25: (1.563926862s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-313368 ssh                                | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:13:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:00.891560  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:13:06.971548  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:10.043616  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:16.123615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:19.195641  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:25.275569  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:28.347627  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:34.427628  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:37.499621  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:43.579636  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:46.651611  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:52.731602  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:55.803555  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:01.883545  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:04.955579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:11.035610  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:14.107615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:20.187606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:23.259572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:29.339575  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:32.411617  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:38.491587  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:41.563659  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:47.643582  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:50.715565  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:56.795596  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:59.867614  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:05.947572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:09.019585  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:15.099606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:18.171563  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:24.251589  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:27.323592  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:33.403599  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:36.475652  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:42.555600  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:45.627577  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:51.707630  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:54.779625  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:00.859579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:03.931626  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:10.011762  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:13.083615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:16.087122  123537 start.go:364] duration metric: took 4m28.254030119s to acquireMachinesLock for "embed-certs-666637"
	I0316 00:16:16.087211  123537 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:16.087224  123537 fix.go:54] fixHost starting: 
	I0316 00:16:16.087613  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:16.087653  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:16.102371  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0316 00:16:16.102813  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:16.103305  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:16.103343  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:16.103693  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:16.103874  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:16.104010  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:16.105752  123537 fix.go:112] recreateIfNeeded on embed-certs-666637: state=Stopped err=<nil>
	I0316 00:16:16.105780  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	W0316 00:16:16.105959  123537 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:16.107881  123537 out.go:177] * Restarting existing kvm2 VM for "embed-certs-666637" ...
	I0316 00:16:16.109056  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Start
	I0316 00:16:16.109231  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring networks are active...
	I0316 00:16:16.110036  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network default is active
	I0316 00:16:16.110372  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network mk-embed-certs-666637 is active
	I0316 00:16:16.110782  123537 main.go:141] libmachine: (embed-certs-666637) Getting domain xml...
	I0316 00:16:16.111608  123537 main.go:141] libmachine: (embed-certs-666637) Creating domain...
	I0316 00:16:17.296901  123537 main.go:141] libmachine: (embed-certs-666637) Waiting to get IP...
	I0316 00:16:17.297746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.298129  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.298317  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.298111  124543 retry.go:31] will retry after 269.98852ms: waiting for machine to come up
	I0316 00:16:17.569866  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.570322  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.570349  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.570278  124543 retry.go:31] will retry after 244.711835ms: waiting for machine to come up
	I0316 00:16:16.084301  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:16.084359  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084699  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:16:16.084726  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084970  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:16:16.086868  123454 machine.go:97] duration metric: took 4m35.39093995s to provisionDockerMachine
	I0316 00:16:16.087007  123454 fix.go:56] duration metric: took 4m35.413006758s for fixHost
	I0316 00:16:16.087038  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 4m35.413320023s
	W0316 00:16:16.087068  123454 start.go:713] error starting host: provision: host is not running
	W0316 00:16:16.087236  123454 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0316 00:16:16.087249  123454 start.go:728] Will try again in 5 seconds ...
	I0316 00:16:17.816747  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.817165  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.817196  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.817109  124543 retry.go:31] will retry after 326.155242ms: waiting for machine to come up
	I0316 00:16:18.144611  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.145047  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.145081  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.145000  124543 retry.go:31] will retry after 464.805158ms: waiting for machine to come up
	I0316 00:16:18.611746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.612105  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.612140  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.612039  124543 retry.go:31] will retry after 593.718495ms: waiting for machine to come up
	I0316 00:16:19.208024  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.208444  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.208476  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.208379  124543 retry.go:31] will retry after 772.07702ms: waiting for machine to come up
	I0316 00:16:19.982326  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.982800  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.982827  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.982706  124543 retry.go:31] will retry after 846.887476ms: waiting for machine to come up
	I0316 00:16:20.830726  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:20.831144  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:20.831168  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:20.831098  124543 retry.go:31] will retry after 1.274824907s: waiting for machine to come up
	I0316 00:16:22.107855  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:22.108252  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:22.108278  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:22.108209  124543 retry.go:31] will retry after 1.41217789s: waiting for machine to come up
	I0316 00:16:21.088013  123454 start.go:360] acquireMachinesLock for no-preload-238598: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:23.522725  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:23.523143  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:23.523179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:23.523094  124543 retry.go:31] will retry after 1.567285216s: waiting for machine to come up
	I0316 00:16:25.092539  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:25.092954  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:25.092981  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:25.092941  124543 retry.go:31] will retry after 2.260428679s: waiting for machine to come up
	I0316 00:16:27.354650  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:27.355051  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:27.355082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:27.354990  124543 retry.go:31] will retry after 2.402464465s: waiting for machine to come up
	I0316 00:16:29.758774  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:29.759220  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:29.759253  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:29.759176  124543 retry.go:31] will retry after 3.63505234s: waiting for machine to come up
	I0316 00:16:34.648552  123819 start.go:364] duration metric: took 4m4.062008179s to acquireMachinesLock for "default-k8s-diff-port-313436"
	I0316 00:16:34.648628  123819 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:34.648638  123819 fix.go:54] fixHost starting: 
	I0316 00:16:34.649089  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:34.649134  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:34.667801  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0316 00:16:34.668234  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:34.668737  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:16:34.668768  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:34.669123  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:34.669349  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:34.669552  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:16:34.671100  123819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313436: state=Stopped err=<nil>
	I0316 00:16:34.671139  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	W0316 00:16:34.671297  123819 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:34.673738  123819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-313436" ...
	I0316 00:16:34.675120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Start
	I0316 00:16:34.675292  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring networks are active...
	I0316 00:16:34.676038  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network default is active
	I0316 00:16:34.676427  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network mk-default-k8s-diff-port-313436 is active
	I0316 00:16:34.676855  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Getting domain xml...
	I0316 00:16:34.677501  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Creating domain...
	I0316 00:16:33.397686  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398274  123537 main.go:141] libmachine: (embed-certs-666637) Found IP for machine: 192.168.61.91
	I0316 00:16:33.398301  123537 main.go:141] libmachine: (embed-certs-666637) Reserving static IP address...
	I0316 00:16:33.398319  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has current primary IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398829  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.398859  123537 main.go:141] libmachine: (embed-certs-666637) DBG | skip adding static IP to network mk-embed-certs-666637 - found existing host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"}
	I0316 00:16:33.398883  123537 main.go:141] libmachine: (embed-certs-666637) Reserved static IP address: 192.168.61.91
	I0316 00:16:33.398896  123537 main.go:141] libmachine: (embed-certs-666637) Waiting for SSH to be available...
	I0316 00:16:33.398905  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Getting to WaitForSSH function...
	I0316 00:16:33.401376  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.401835  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.401872  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.402054  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH client type: external
	I0316 00:16:33.402082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa (-rw-------)
	I0316 00:16:33.402113  123537 main.go:141] libmachine: (embed-certs-666637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:33.402141  123537 main.go:141] libmachine: (embed-certs-666637) DBG | About to run SSH command:
	I0316 00:16:33.402188  123537 main.go:141] libmachine: (embed-certs-666637) DBG | exit 0
	I0316 00:16:33.523353  123537 main.go:141] libmachine: (embed-certs-666637) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:33.523747  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetConfigRaw
	I0316 00:16:33.524393  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.526639  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527046  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.527080  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527278  123537 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:16:33.527509  123537 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:33.527527  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:33.527766  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.529906  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.530210  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530341  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.530596  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530816  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530953  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.531119  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.531334  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.531348  123537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:33.635573  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:33.635601  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.635879  123537 buildroot.go:166] provisioning hostname "embed-certs-666637"
	I0316 00:16:33.635905  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.636109  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.638998  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639369  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.639417  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639629  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.639795  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.639971  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.640103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.640366  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.640524  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.640543  123537 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-666637 && echo "embed-certs-666637" | sudo tee /etc/hostname
	I0316 00:16:33.757019  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-666637
	
	I0316 00:16:33.757049  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.759808  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760120  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.760154  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760375  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.760583  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760723  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760829  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.760951  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.761121  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.761144  123537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-666637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-666637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-666637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:33.873548  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:33.873587  123537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:33.873642  123537 buildroot.go:174] setting up certificates
	I0316 00:16:33.873654  123537 provision.go:84] configureAuth start
	I0316 00:16:33.873666  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.873986  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.876609  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.876976  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.877004  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.877194  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.879624  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880156  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.880185  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880300  123537 provision.go:143] copyHostCerts
	I0316 00:16:33.880359  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:33.880370  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:33.880441  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:33.880526  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:33.880534  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:33.880558  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:33.880625  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:33.880632  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:33.880653  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:33.880707  123537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.embed-certs-666637 san=[127.0.0.1 192.168.61.91 embed-certs-666637 localhost minikube]
	I0316 00:16:33.984403  123537 provision.go:177] copyRemoteCerts
	I0316 00:16:33.984471  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:33.984499  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.987297  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987711  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.987741  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987894  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.988108  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.988284  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.988456  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.069540  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:34.094494  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 00:16:34.119198  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:34.144669  123537 provision.go:87] duration metric: took 271.000471ms to configureAuth
	I0316 00:16:34.144701  123537 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:34.144891  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:34.144989  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.148055  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148464  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.148496  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148710  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.148918  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149097  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149251  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.149416  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.149580  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.149596  123537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:34.414026  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:34.414058  123537 machine.go:97] duration metric: took 886.536134ms to provisionDockerMachine
	I0316 00:16:34.414070  123537 start.go:293] postStartSetup for "embed-certs-666637" (driver="kvm2")
	I0316 00:16:34.414081  123537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:34.414101  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.414464  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:34.414497  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.417211  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417482  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.417520  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417617  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.417804  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.417990  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.418126  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.498223  123537 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:34.502954  123537 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:34.502989  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:34.503068  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:34.503156  123537 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:34.503258  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:34.513065  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:34.537606  123537 start.go:296] duration metric: took 123.521431ms for postStartSetup
	I0316 00:16:34.537657  123537 fix.go:56] duration metric: took 18.450434099s for fixHost
	I0316 00:16:34.537679  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.540574  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.540908  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.540950  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.541086  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.541302  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541471  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541609  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.541803  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.542009  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.542025  123537 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:16:34.648381  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548194.613058580
	
	I0316 00:16:34.648419  123537 fix.go:216] guest clock: 1710548194.613058580
	I0316 00:16:34.648427  123537 fix.go:229] Guest: 2024-03-16 00:16:34.61305858 +0000 UTC Remote: 2024-03-16 00:16:34.537661993 +0000 UTC m=+286.854063579 (delta=75.396587ms)
	I0316 00:16:34.648454  123537 fix.go:200] guest clock delta is within tolerance: 75.396587ms
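The two fix.go lines above compare the guest clock, read over SSH with `date +%s.%N`, against the local wall clock and accept the host when the drift is small. Below is a minimal Go sketch of that comparison, assuming the guest's output has already been captured as a string; the 2s tolerance is an illustrative value, not necessarily the one minikube uses.

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "strings"
        "time"
    )

    // checkGuestClock parses the guest's `date +%s.%N` output and reports the
    // drift against the local clock and whether it is within tolerance.
    // (Float parsing loses sub-microsecond precision; fine for a drift check.)
    func checkGuestClock(guestOut string, tolerance time.Duration) (time.Duration, bool, error) {
        secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
        if err != nil {
            return 0, false, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        delta := time.Since(guest)
        return delta, math.Abs(float64(delta)) <= float64(tolerance), nil
    }

    func main() {
        // The guest in the log above reported 1710548194.613058580.
        delta, ok, err := checkGuestClock("1710548194.613058580", 2*time.Second)
        fmt.Println(delta, ok, err)
    }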
	I0316 00:16:34.648459  123537 start.go:83] releasing machines lock for "embed-certs-666637", held for 18.561300744s
	I0316 00:16:34.648483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.648770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:34.651350  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651748  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.651794  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651926  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652573  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652810  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652907  123537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:34.652965  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.653064  123537 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:34.653090  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.655796  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656121  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656149  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656170  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656281  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656461  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.656562  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656586  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656640  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.656739  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656807  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.656883  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.657023  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.657249  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.759596  123537 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:34.765571  123537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:34.915897  123537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:34.923372  123537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:34.923471  123537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:34.940579  123537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:34.940613  123537 start.go:494] detecting cgroup driver to use...
	I0316 00:16:34.940699  123537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:34.957640  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:34.971525  123537 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:34.971598  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:34.987985  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:35.001952  123537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:35.124357  123537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:35.273948  123537 docker.go:233] disabling docker service ...
	I0316 00:16:35.274037  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:35.291073  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:35.311209  123537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:35.460630  123537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:35.581263  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:35.596460  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:35.617992  123537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:35.618042  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.628372  123537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:35.628426  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.639487  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.650397  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
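The four sed/rm commands above point CRI-O at the registry.k8s.io/pause:3.9 pause image, switch its cgroup manager to cgroupfs, and pin conmon's cgroup. After they run, the drop-in /etc/crio/crio.conf.d/02-crio.conf contains lines along these lines (the surrounding file layout depends on the guest image; only the edited keys are shown):

    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"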
	I0316 00:16:35.662065  123537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:35.676003  123537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:35.686159  123537 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:35.686241  123537 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:35.699814  123537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
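When the sysctl probe above fails with status 255 (the bridge sysctls do not exist until the module is loaded), the runner falls back to `modprobe br_netfilter` and then enables IPv4 forwarding. A minimal Go sketch of that probe-then-modprobe fallback, assuming the commands are run locally rather than through the SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ensureBridgeNetfilter probes the bridge netfilter sysctl and, if it is
    // missing, loads the module that creates /proc/sys/net/bridge/*.
    func ensureBridgeNetfilter() error {
        if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err == nil {
            return nil // sysctl already present, nothing to do
        }
        if out, err := exec.Command("sudo", "modprobe", "br_netfilter").CombinedOutput(); err != nil {
            return fmt.Errorf("modprobe br_netfilter: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(ensureBridgeNetfilter())
    }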
	I0316 00:16:35.710182  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:35.831831  123537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:35.977556  123537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:35.977638  123537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:35.982729  123537 start.go:562] Will wait 60s for crictl version
	I0316 00:16:35.982806  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:16:35.986695  123537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:36.023299  123537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:36.023412  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.055441  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.090313  123537 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:36.091622  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:36.094687  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095062  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:36.095098  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095277  123537 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:36.099781  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:36.113522  123537 kubeadm.go:877] updating cluster {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:36.113674  123537 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:36.113743  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:36.152208  123537 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:36.152300  123537 ssh_runner.go:195] Run: which lz4
	I0316 00:16:36.156802  123537 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:16:36.161430  123537 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:36.161472  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
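The sequence above is an existence check before a large copy: `stat` the target path, and only when it exits non-zero transfer the ~458 MB preload tarball. A tiny Go sketch of the same decision, using a local os.Stat to stand in for the remote `stat` the log runs over SSH:

    package main

    import (
        "fmt"
        "os"
    )

    // needsPreload reports whether the preload tarball still has to be copied,
    // i.e. the path does not exist on the target filesystem.
    func needsPreload(path string) (bool, error) {
        _, err := os.Stat(path)
        if os.IsNotExist(err) {
            return true, nil
        }
        return false, err
    }

    func main() {
        fmt.Println(needsPreload("/preloaded.tar.lz4"))
    }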
	I0316 00:16:35.911510  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting to get IP...
	I0316 00:16:35.912562  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.912986  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.913064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:35.912955  124655 retry.go:31] will retry after 248.147893ms: waiting for machine to come up
	I0316 00:16:36.162476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163094  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163127  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.163032  124655 retry.go:31] will retry after 387.219214ms: waiting for machine to come up
	I0316 00:16:36.551678  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552203  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552236  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.552178  124655 retry.go:31] will retry after 391.385671ms: waiting for machine to come up
	I0316 00:16:36.945741  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946275  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.946216  124655 retry.go:31] will retry after 470.449619ms: waiting for machine to come up
	I0316 00:16:37.417836  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418324  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418353  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.418259  124655 retry.go:31] will retry after 508.962644ms: waiting for machine to come up
	I0316 00:16:37.929194  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929710  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.929671  124655 retry.go:31] will retry after 877.538639ms: waiting for machine to come up
	I0316 00:16:38.808551  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809061  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809100  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:38.809002  124655 retry.go:31] will retry after 754.319242ms: waiting for machine to come up
	I0316 00:16:39.565060  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565475  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565512  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:39.565411  124655 retry.go:31] will retry after 1.472475348s: waiting for machine to come up
	I0316 00:16:37.946470  123537 crio.go:444] duration metric: took 1.789700065s to copy over tarball
	I0316 00:16:37.946552  123537 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:40.497841  123537 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551257887s)
	I0316 00:16:40.497867  123537 crio.go:451] duration metric: took 2.551367803s to extract the tarball
	I0316 00:16:40.497875  123537 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:40.539695  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:40.588945  123537 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:40.588974  123537 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:40.588983  123537 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.28.4 crio true true} ...
	I0316 00:16:40.589125  123537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-666637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:40.589216  123537 ssh_runner.go:195] Run: crio config
	I0316 00:16:40.641673  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:40.641702  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:40.641719  123537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:40.641754  123537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-666637 NodeName:embed-certs-666637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:40.641939  123537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-666637"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:40.642024  123537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:40.652461  123537 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:40.652539  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:40.662114  123537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 00:16:40.679782  123537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:40.701982  123537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0316 00:16:40.720088  123537 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:40.724199  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:40.737133  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:40.860343  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:40.878437  123537 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637 for IP: 192.168.61.91
	I0316 00:16:40.878466  123537 certs.go:194] generating shared ca certs ...
	I0316 00:16:40.878489  123537 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:40.878690  123537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:40.878766  123537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:40.878779  123537 certs.go:256] generating profile certs ...
	I0316 00:16:40.878888  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/client.key
	I0316 00:16:40.878990  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key.07955952
	I0316 00:16:40.879059  123537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key
	I0316 00:16:40.879178  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:40.879225  123537 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:40.879239  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:40.879271  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:40.879302  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:40.879352  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:40.879409  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:40.880141  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:40.924047  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:40.962441  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:41.000283  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:41.034353  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 00:16:41.069315  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:16:41.100325  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:16:41.129285  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:16:41.155899  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:16:41.180657  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:16:41.205961  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:16:41.231886  123537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:16:41.249785  123537 ssh_runner.go:195] Run: openssl version
	I0316 00:16:41.255703  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:16:41.266968  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271536  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271595  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.277460  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:16:41.288854  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:16:41.300302  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305189  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305256  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.311200  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:16:41.322784  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:16:41.334879  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339774  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339837  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.345746  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
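The pattern above installs each PEM into /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL-based clients discover trust anchors. A minimal Go sketch of creating such a link, taking the hash from `openssl x509 -hash -noout` exactly as the log does; root privileges and an installed openssl binary are assumed.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkByHash symlinks certPath into certDir as "<subject-hash>.0",
    // mirroring the `ln -fs` commands in the log above.
    func linkByHash(certPath, certDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", certPath, err)
        }
        link := filepath.Join(certDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // replace any stale link, like ln -fs
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
    }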
	I0316 00:16:41.357661  123537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:16:41.362469  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:16:41.368875  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:16:41.375759  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:16:41.382518  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:16:41.388629  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:16:41.394882  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
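Each `openssl x509 -checkend 86400` call above succeeds only if the certificate is still valid for at least 24 hours, which is how the restart path decides whether control-plane certs need regeneration. An equivalent check in Go with crypto/x509, assuming the certificate is PEM-encoded on disk:

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires within
    // the given window (the openssl calls above use 86400s, i.e. 24h).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
    }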
	I0316 00:16:41.401114  123537 kubeadm.go:391] StartCluster: {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:16:41.401243  123537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:16:41.401304  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.449499  123537 cri.go:89] found id: ""
	I0316 00:16:41.449590  123537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:16:41.461139  123537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:16:41.461165  123537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:16:41.461173  123537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:16:41.461243  123537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:16:41.473648  123537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:16:41.474652  123537 kubeconfig.go:125] found "embed-certs-666637" server: "https://192.168.61.91:8443"
	I0316 00:16:41.476724  123537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:16:41.488387  123537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0316 00:16:41.488426  123537 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:16:41.488439  123537 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:16:41.488485  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.526197  123537 cri.go:89] found id: ""
	I0316 00:16:41.526283  123537 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:16:41.545489  123537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:16:41.555977  123537 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:16:41.555998  123537 kubeadm.go:156] found existing configuration files:
	
	I0316 00:16:41.556048  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:16:41.565806  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:16:41.565891  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:16:41.575646  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:16:41.585269  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:16:41.585329  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:16:41.595336  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.605081  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:16:41.605144  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.615182  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:16:41.624781  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:16:41.624837  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:16:41.634852  123537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:16:41.644749  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.748782  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.477775  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.688730  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.039441  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039924  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:41.039885  124655 retry.go:31] will retry after 1.408692905s: waiting for machine to come up
	I0316 00:16:42.449971  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450402  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:42.450355  124655 retry.go:31] will retry after 1.539639877s: waiting for machine to come up
	I0316 00:16:43.992314  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992833  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992869  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:43.992777  124655 retry.go:31] will retry after 2.297369864s: waiting for machine to come up
	I0316 00:16:42.777223  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.944089  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:16:42.944193  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.445082  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.945117  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.963812  123537 api_server.go:72] duration metric: took 1.019723734s to wait for apiserver process to appear ...
	I0316 00:16:43.963845  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:16:43.963871  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.924208  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.924258  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.924278  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.953212  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.953245  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.964449  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.988201  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.988232  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:47.464502  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.469385  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.469421  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:47.964483  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.970448  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.970492  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:48.463984  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:48.468908  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:16:48.476120  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:16:48.476153  123537 api_server.go:131] duration metric: took 4.512298176s to wait for apiserver health ...
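The healthz loop above keeps hitting https://192.168.61.91:8443/healthz, treating 403 (anonymous user rejected) and 500 (post-start hooks such as rbac/bootstrap-roles still pending) as "not ready yet", until the endpoint finally returns 200/ok. A minimal polling sketch in Go, assuming an unauthenticated probe that skips TLS verification:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        fmt.Println(waitForHealthz("https://192.168.61.91:8443/healthz", 4*time.Minute))
    }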
	I0316 00:16:48.476164  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:48.476172  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:48.478076  123537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:16:48.479565  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:16:48.490129  123537 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:16:48.516263  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:16:48.532732  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:16:48.532768  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:16:48.532778  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:16:48.532788  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:16:48.532795  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:16:48.532801  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:16:48.532808  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:16:48.532815  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:16:48.532822  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:16:48.532833  123537 system_pods.go:74] duration metric: took 16.547677ms to wait for pod list to return data ...
	I0316 00:16:48.532845  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:16:48.535945  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:16:48.535989  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:16:48.536006  123537 node_conditions.go:105] duration metric: took 3.154184ms to run NodePressure ...
	I0316 00:16:48.536027  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:48.733537  123537 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739166  123537 kubeadm.go:733] kubelet initialised
	I0316 00:16:48.739196  123537 kubeadm.go:734] duration metric: took 5.63118ms waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739209  123537 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:48.744724  123537 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.750261  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750299  123537 pod_ready.go:81] duration metric: took 5.547917ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.750310  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750323  123537 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.755340  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755362  123537 pod_ready.go:81] duration metric: took 5.029639ms for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.755371  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755379  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.761104  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761128  123537 pod_ready.go:81] duration metric: took 5.740133ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.761138  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761146  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.921215  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921244  123537 pod_ready.go:81] duration metric: took 160.08501ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.921254  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921260  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.319922  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319954  123537 pod_ready.go:81] duration metric: took 398.685799ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.319963  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319969  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.720866  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720922  123537 pod_ready.go:81] duration metric: took 400.944023ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.720948  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720967  123537 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:50.120836  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120865  123537 pod_ready.go:81] duration metric: took 399.883676ms for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:50.120875  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120882  123537 pod_ready.go:38] duration metric: took 1.381661602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
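The pod_ready block above waits for each system-critical pod's Ready condition and skips pods whose node is not yet Ready. A minimal client-go sketch of a Ready wait for a single pod (not minikube's pod_ready.go; the kubeconfig path is the one written earlier in this log, and the 2s poll interval is an assumption):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls the pod until its Ready condition is True or the timeout expires.
    func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %v", ns, name, timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17991-75602/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-t8xb4", 4*time.Minute))
    }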
	I0316 00:16:50.120923  123537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:16:50.133619  123537 ops.go:34] apiserver oom_adj: -16
	I0316 00:16:50.133653  123537 kubeadm.go:591] duration metric: took 8.672472438s to restartPrimaryControlPlane
	I0316 00:16:50.133663  123537 kubeadm.go:393] duration metric: took 8.732557685s to StartCluster
	I0316 00:16:50.133684  123537 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.133760  123537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:16:50.135355  123537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.135613  123537 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:16:50.140637  123537 out.go:177] * Verifying Kubernetes components...
	I0316 00:16:50.135727  123537 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:16:50.135843  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:50.142015  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:50.142027  123537 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-666637"
	I0316 00:16:50.142050  123537 addons.go:69] Setting default-storageclass=true in profile "embed-certs-666637"
	I0316 00:16:50.142070  123537 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-666637"
	W0316 00:16:50.142079  123537 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:16:50.142090  123537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-666637"
	I0316 00:16:50.142092  123537 addons.go:69] Setting metrics-server=true in profile "embed-certs-666637"
	I0316 00:16:50.142121  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142124  123537 addons.go:234] Setting addon metrics-server=true in "embed-certs-666637"
	W0316 00:16:50.142136  123537 addons.go:243] addon metrics-server should already be in state true
	I0316 00:16:50.142168  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142439  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142468  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142558  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142577  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.156773  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0316 00:16:50.156804  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0316 00:16:50.157267  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157268  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157591  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0316 00:16:50.157835  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157841  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157857  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157858  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157925  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.158223  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158226  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158404  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.158419  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.158731  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158753  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158795  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158828  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158932  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.159126  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.162347  123537 addons.go:234] Setting addon default-storageclass=true in "embed-certs-666637"
	W0316 00:16:50.162365  123537 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:16:50.162392  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.162612  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.162649  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.172299  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0316 00:16:50.172676  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.173173  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.173193  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.173547  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.173770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.175668  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.177676  123537 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:16:50.175968  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0316 00:16:50.176110  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0316 00:16:50.179172  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:16:50.179189  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:16:50.179206  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.179453  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179538  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179888  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.179909  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180021  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.180037  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180266  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180385  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180613  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.180788  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.180811  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.185060  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.192504  123537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:16:46.292804  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293326  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293363  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:46.293267  124655 retry.go:31] will retry after 2.301997121s: waiting for machine to come up
	I0316 00:16:48.596337  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596777  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:48.596731  124655 retry.go:31] will retry after 3.159447069s: waiting for machine to come up
	I0316 00:16:50.186146  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.186717  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.193945  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.193971  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.194051  123537 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.194079  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:16:50.194100  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.194103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.194264  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.194420  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.196511  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0316 00:16:50.197160  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.197580  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.197598  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.197658  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198007  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.198039  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.198038  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198235  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.198237  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.198435  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.198612  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.198772  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.200270  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.200540  123537 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.200554  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:16:50.200566  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.203147  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203634  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.203655  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203765  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.203966  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.204201  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.204335  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.317046  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:50.340203  123537 node_ready.go:35] waiting up to 6m0s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:50.415453  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.423732  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.424648  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:16:50.424663  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:16:50.470134  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:16:50.470164  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:16:50.518806  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:50.518833  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:16:50.570454  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:51.627153  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203388401s)
	I0316 00:16:51.627211  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627222  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627419  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211925303s)
	I0316 00:16:51.627468  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627533  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627595  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627609  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627620  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627549  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627859  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627885  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627895  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627914  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627956  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627976  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.629345  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.633811  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.633831  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.634043  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.634081  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726400  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.15588774s)
	I0316 00:16:51.726458  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726472  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.726820  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.726853  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.726875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726889  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726898  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.727178  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.727193  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.727206  123537 addons.go:470] Verifying addon metrics-server=true in "embed-certs-666637"
	I0316 00:16:51.729277  123537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0316 00:16:51.730645  123537 addons.go:505] duration metric: took 1.594919212s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0316 00:16:52.344107  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:51.757133  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757570  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Found IP for machine: 192.168.72.198
	I0316 00:16:51.757603  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has current primary IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserving static IP address...
	I0316 00:16:51.758067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.758093  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | skip adding static IP to network mk-default-k8s-diff-port-313436 - found existing host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"}
	I0316 00:16:51.758110  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserved static IP address: 192.168.72.198
	I0316 00:16:51.758120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Getting to WaitForSSH function...
	I0316 00:16:51.758138  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for SSH to be available...
	I0316 00:16:51.760276  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760596  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.760632  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760711  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH client type: external
	I0316 00:16:51.760744  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa (-rw-------)
	I0316 00:16:51.760797  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:51.760820  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | About to run SSH command:
	I0316 00:16:51.760861  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | exit 0
	I0316 00:16:51.887432  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:51.887829  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetConfigRaw
	I0316 00:16:51.888471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:51.891514  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.891923  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.891949  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.892232  123819 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:16:51.892502  123819 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:51.892527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:51.892782  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:51.895025  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.895367  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:51.895683  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895841  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:51.896178  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:51.896361  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:51.896372  123819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:52.012107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:52.012154  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012405  123819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-313436"
	I0316 00:16:52.012434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012640  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.015307  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.015823  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.015847  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.016055  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.016266  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016433  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016565  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.016758  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.016976  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.016992  123819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313436 && echo "default-k8s-diff-port-313436" | sudo tee /etc/hostname
	I0316 00:16:52.149152  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313436
	
	I0316 00:16:52.149180  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.152472  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.152852  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.152896  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.153056  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.153239  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153412  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.153837  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.154077  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.154108  123819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:52.285258  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:52.285290  123819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:52.285313  123819 buildroot.go:174] setting up certificates
	I0316 00:16:52.285323  123819 provision.go:84] configureAuth start
	I0316 00:16:52.285331  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.285631  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:52.288214  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288494  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.288527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288699  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.290965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291354  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.291380  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291571  123819 provision.go:143] copyHostCerts
	I0316 00:16:52.291644  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:52.291658  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:52.291719  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:52.291827  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:52.291839  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:52.291868  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:52.291966  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:52.291978  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:52.292005  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:52.292095  123819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313436 san=[127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube]
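provision.go above generates a server certificate whose SANs cover 127.0.0.1, 192.168.72.198, the machine name, localhost and minikube. A short standard-library sketch of producing such a certificate (not minikube's implementation: it is self-signed here for brevity, whereas minikube signs with its ca.pem/ca-key.pem, and the one-year validity is an assumption):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-313436"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"default-k8s-diff-port-313436", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.198")},
        }
        // Self-signed for brevity; a CA-signed cert would pass the CA cert and key here instead.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        out, err := os.Create("server.pem")
        if err != nil {
            panic(err)
        }
        defer out.Close()
        pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }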
	I0316 00:16:52.536692  123819 provision.go:177] copyRemoteCerts
	I0316 00:16:52.536756  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:52.536790  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.539525  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.539805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.539837  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.540067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.540264  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.540424  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.540599  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:52.629139  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:52.655092  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0316 00:16:52.681372  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:52.706496  123819 provision.go:87] duration metric: took 421.160351ms to configureAuth
	I0316 00:16:52.706529  123819 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:52.706737  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:52.706828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.709743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710173  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.710198  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710403  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.710616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710822  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710983  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.711148  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.711359  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.711380  123819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:53.005107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:53.005138  123819 machine.go:97] duration metric: took 1.112619102s to provisionDockerMachine
	I0316 00:16:53.005153  123819 start.go:293] postStartSetup for "default-k8s-diff-port-313436" (driver="kvm2")
	I0316 00:16:53.005166  123819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:53.005185  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.005547  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:53.005581  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.008749  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009170  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.009196  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009416  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.009617  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.009795  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.009973  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.100468  123819 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:53.105158  123819 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:53.105181  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:53.105243  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:53.105314  123819 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:53.105399  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:53.116078  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:53.142400  123819 start.go:296] duration metric: took 137.231635ms for postStartSetup
	I0316 00:16:53.142454  123819 fix.go:56] duration metric: took 18.493815855s for fixHost
	I0316 00:16:53.142483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.145282  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145658  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.145688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145878  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.146104  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146288  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146445  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.146625  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:53.146820  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:53.146834  123819 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0316 00:16:53.260232  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548213.237261690
	
	I0316 00:16:53.260255  123819 fix.go:216] guest clock: 1710548213.237261690
	I0316 00:16:53.260262  123819 fix.go:229] Guest: 2024-03-16 00:16:53.23726169 +0000 UTC Remote: 2024-03-16 00:16:53.142460792 +0000 UTC m=+262.706636561 (delta=94.800898ms)
	I0316 00:16:53.260292  123819 fix.go:200] guest clock delta is within tolerance: 94.800898ms
	I0316 00:16:53.260298  123819 start.go:83] releasing machines lock for "default-k8s-diff-port-313436", held for 18.611697781s
	I0316 00:16:53.260323  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.260629  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:53.263641  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264002  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.264032  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.264889  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265217  123819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:53.265273  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.265404  123819 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:53.265434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.268274  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268538  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268684  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268727  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.268969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268995  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.269113  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269206  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.269298  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269419  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.269476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269572  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.372247  123819 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:53.378643  123819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:53.527036  123819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:53.534220  123819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:53.534312  123819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:53.554856  123819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:53.554900  123819 start.go:494] detecting cgroup driver to use...
	I0316 00:16:53.554971  123819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:53.580723  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:53.599919  123819 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:53.599996  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:53.613989  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:53.628748  123819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:53.745409  123819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:53.906668  123819 docker.go:233] disabling docker service ...
	I0316 00:16:53.906733  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:53.928452  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:53.949195  123819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:54.118868  123819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:54.250006  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:54.264754  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:54.285825  123819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:54.285890  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.298522  123819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:54.298590  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.311118  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.323928  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
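
A quick way to confirm what the three sed edits above leave behind in the CRI-O drop-in (file path taken from the log; the surrounding content and line order may differ):

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# expected to include, assuming the edits applied cleanly:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
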
	I0316 00:16:54.336128  123819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:54.348715  123819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:54.359657  123819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:54.359718  123819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:54.376411  123819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
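
The failed sysctl read above only means the br_netfilter module is not loaded yet; the fallback is to load the module and then enable IPv4 forwarding before CRI-O is restarted. A minimal sketch of the same sequence:

	sudo sysctl net.bridge.bridge-nf-call-iptables \
	  || sudo modprobe br_netfilter                 # the sysctl key only exists once the module is loaded
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
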
	I0316 00:16:54.388136  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:54.530444  123819 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:54.681895  123819 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:54.681984  123819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:54.687334  123819 start.go:562] Will wait 60s for crictl version
	I0316 00:16:54.687398  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:16:54.691443  123819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:54.730408  123819 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:54.730505  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.761591  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.792351  123819 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
	I0316 00:16:54.793693  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:54.797023  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797439  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:54.797471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797665  123819 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:54.802065  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:54.815168  123819 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:54.815285  123819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:54.815345  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:54.855493  123819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:54.855553  123819 ssh_runner.go:195] Run: which lz4
	I0316 00:16:54.860096  123819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:16:54.865644  123819 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:54.865675  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:54.345117  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:56.346342  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:57.346164  123537 node_ready.go:49] node "embed-certs-666637" has status "Ready":"True"
	I0316 00:16:57.346194  123537 node_ready.go:38] duration metric: took 7.005950923s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:57.346207  123537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:57.361331  123537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377726  123537 pod_ready.go:92] pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace has status "Ready":"True"
	I0316 00:16:57.377750  123537 pod_ready.go:81] duration metric: took 16.388353ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377760  123537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
	I0316 00:16:56.676506  123819 crio.go:444] duration metric: took 1.816442841s to copy over tarball
	I0316 00:16:56.676609  123819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:59.338617  123819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661966532s)
	I0316 00:16:59.338655  123819 crio.go:451] duration metric: took 2.662115388s to extract the tarball
	I0316 00:16:59.338665  123819 ssh_runner.go:146] rm: /preloaded.tar.lz4
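
The preload path avoids pulling every image over the network: a roughly 458 MB lz4 tarball of the image store is copied into the VM and unpacked over /var, after which crictl reports the images as already present. Done by hand it would look approximately like this (tarball name and tar flags from the log; staged under /tmp here, whereas minikube's runner writes straight to /preloaded.tar.lz4):

	scp preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 docker@192.168.72.198:/tmp/preloaded.tar.lz4
	ssh docker@192.168.72.198 \
	  'sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /tmp/preloaded.tar.lz4 && sudo rm /tmp/preloaded.tar.lz4'
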
	I0316 00:16:59.387693  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:59.453534  123819 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:59.453565  123819 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:59.453575  123819 kubeadm.go:928] updating node { 192.168.72.198 8444 v1.28.4 crio true true} ...
	I0316 00:16:59.453744  123819 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-313436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:59.453841  123819 ssh_runner.go:195] Run: crio config
	I0316 00:16:59.518492  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:16:59.518525  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:59.518543  123819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:59.518572  123819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.198 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313436 NodeName:default-k8s-diff-port-313436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:59.518791  123819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.198
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313436"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:16:59.518876  123819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:59.529778  123819 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:59.529860  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:59.542186  123819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0316 00:16:59.563037  123819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:59.585167  123819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 00:16:59.607744  123819 ssh_runner.go:195] Run: grep 192.168.72.198	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:59.612687  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:59.628607  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:59.767487  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:59.786494  123819 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436 for IP: 192.168.72.198
	I0316 00:16:59.786520  123819 certs.go:194] generating shared ca certs ...
	I0316 00:16:59.786545  123819 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:59.786688  123819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:59.786722  123819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:59.786728  123819 certs.go:256] generating profile certs ...
	I0316 00:16:59.786827  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.key
	I0316 00:16:59.786975  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key.254d5830
	I0316 00:16:59.787049  123819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key
	I0316 00:16:59.787204  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:59.787248  123819 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:59.787262  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:59.787295  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:59.787351  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:59.787386  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:59.787449  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:59.788288  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:59.824257  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:59.859470  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:59.904672  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:59.931832  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0316 00:16:59.965654  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:00.006949  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:00.039120  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:00.071341  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:00.095585  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:00.122165  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:00.149982  123819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:00.170019  123819 ssh_runner.go:195] Run: openssl version
	I0316 00:17:00.176232  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:00.188738  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193708  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193780  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.200433  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:00.215116  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:00.228871  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234074  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234141  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.240553  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:00.252454  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:00.264690  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269493  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269573  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.275584  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
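
The odd-looking symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-name hashes: the c_rehash-style layout under /etc/ssl/certs that lets TLS clients look a CA up by hash. Deriving one by hand would be roughly:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
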
	I0316 00:17:00.287859  123819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:00.292474  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:00.298744  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:00.304793  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:00.311156  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:00.317777  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:00.324148  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
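
Each "-checkend 86400" call above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit presumably makes the restart path regenerate that cert instead of reusing it. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h, regenerate"
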
	I0316 00:17:00.330667  123819 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:00.330763  123819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:00.330813  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.374868  123819 cri.go:89] found id: ""
	I0316 00:17:00.374961  123819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:00.386218  123819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:00.386240  123819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:00.386245  123819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:00.386288  123819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:00.397129  123819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:00.398217  123819 kubeconfig.go:125] found "default-k8s-diff-port-313436" server: "https://192.168.72.198:8444"
	I0316 00:17:00.400506  123819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:00.411430  123819 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.198
	I0316 00:17:00.411462  123819 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:00.411477  123819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:00.411528  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.448545  123819 cri.go:89] found id: ""
	I0316 00:17:00.448619  123819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:00.469230  123819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:00.480622  123819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:00.480644  123819 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:00.480695  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0316 00:16:59.384420  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.094272  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.390117  123537 pod_ready.go:92] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.390145  123537 pod_ready.go:81] duration metric: took 5.012377671s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.390156  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398207  123537 pod_ready.go:92] pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.398236  123537 pod_ready.go:81] duration metric: took 8.071855ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398248  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405415  123537 pod_ready.go:92] pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.405443  123537 pod_ready.go:81] duration metric: took 7.186495ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405453  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412646  123537 pod_ready.go:92] pod "kube-proxy-8fpc5" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.412665  123537 pod_ready.go:81] duration metric: took 7.204465ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412673  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606336  123537 pod_ready.go:92] pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.606369  123537 pod_ready.go:81] duration metric: took 193.687951ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606384  123537 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:00.492088  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:00.743504  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:00.756322  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0316 00:17:00.766476  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:00.766545  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:00.776849  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.786610  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:00.786676  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.797455  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0316 00:17:00.808026  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:00.808083  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:00.819306  123819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:00.834822  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:00.962203  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.535753  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.762322  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.843195  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.944855  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:01.944971  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.446047  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.945791  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.983641  123819 api_server.go:72] duration metric: took 1.038786332s to wait for apiserver process to appear ...
	I0316 00:17:02.983680  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:02.983704  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:04.615157  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:07.114447  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:06.343729  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.343763  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.343786  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.364621  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.364659  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.483852  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.491403  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.491433  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:06.983931  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.994258  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.994296  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.483821  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.506265  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:07.506301  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.983846  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.988700  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:17:07.995996  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:17:07.996021  123819 api_server.go:131] duration metric: took 5.012333318s to wait for apiserver health ...
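
	The 500 responses above are minikube polling the apiserver's /healthz endpoint roughly every half second until the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish; once the endpoint returns 200 ("ok") the health wait ends, as it does here after about 5s. A minimal Go sketch of that kind of poll, assuming the address and port from this log and a self-signed apiserver certificate (hence the skipped TLS verification); waitForHealthz is a hypothetical helper for illustration, not minikube's own function:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the given /healthz URL until it returns 200 or the
	// timeout elapses, mirroring the ~500ms cadence visible in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			// The apiserver serves a self-signed certificate here, so skip verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // endpoint answered "ok"
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.198:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
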
	I0316 00:17:07.996032  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:17:07.996041  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:07.998091  123819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:17:07.999628  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:17:08.010263  123819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
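
	Here minikube creates /etc/cni/net.d and copies a 457-byte conflist for the bridge CNI it just selected. The file's exact contents are not printed in the log; the sketch below writes a typical bridge-plus-portmap conflist of the same shape, so the JSON values are illustrative assumptions rather than what minikube actually wrote:

	package main

	import "os"

	// Illustrative only: a bridge CNI conflist of the general shape minikube
	// installs. The subnet and plugin options are assumptions, not taken from
	// the log above.
	const conflist = `{
	  "cniVersion": "0.3.1",
	  "name": "k8s",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	`

	func main() {
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
			panic(err)
		}
	}
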
	I0316 00:17:08.041667  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:17:08.053611  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:17:08.053656  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:17:08.053668  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:17:08.053681  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:17:08.053694  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:17:08.053706  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:17:08.053717  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:17:08.053730  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:17:08.053739  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:17:08.053747  123819 system_pods.go:74] duration metric: took 12.054433ms to wait for pod list to return data ...
	I0316 00:17:08.053763  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:17:08.057781  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:17:08.057808  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:17:08.057818  123819 node_conditions.go:105] duration metric: took 4.047698ms to run NodePressure ...
	I0316 00:17:08.057837  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:08.282870  123819 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288338  123819 kubeadm.go:733] kubelet initialised
	I0316 00:17:08.288359  123819 kubeadm.go:734] duration metric: took 5.456436ms waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288367  123819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:08.294256  123819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.302762  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302802  123819 pod_ready.go:81] duration metric: took 8.523485ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.302814  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302823  123819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.309581  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309604  123819 pod_ready.go:81] duration metric: took 6.77179ms for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.309617  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309625  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.315399  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315419  123819 pod_ready.go:81] duration metric: took 5.78558ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.315428  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315434  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.445776  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445808  123819 pod_ready.go:81] duration metric: took 130.363739ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.445821  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445829  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.846181  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846228  123819 pod_ready.go:81] duration metric: took 400.382095ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.846243  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846251  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.245568  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245599  123819 pod_ready.go:81] duration metric: took 399.329058ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.245612  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245618  123819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.646855  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646888  123819 pod_ready.go:81] duration metric: took 401.262603ms for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.646901  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646909  123819 pod_ready.go:38] duration metric: took 1.358531936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
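
	The pod_ready loop above visits each system-critical pod and skips it while the hosting node still reports Ready: False, which is why every wait finishes in milliseconds here. A rough equivalent of the per-pod readiness check, shelling out to kubectl and reading the pod's Ready condition; podReady is a hypothetical helper, and minikube itself uses client-go rather than the kubectl binary, so this is only a sketch of the logic:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// podReady reports whether the pod's Ready condition is "True" according
	// to kubectl's jsonpath output.
	func podReady(kubeconfig, namespace, pod string) (bool, error) {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig,
			"-n", namespace, "get", "pod", pod,
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err != nil {
			return false, err
		}
		return strings.TrimSpace(string(out)) == "True", nil
	}

	func main() {
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			ok, err := podReady("/var/lib/minikube/kubeconfig", "kube-system", "coredns-5dd5756b68-w9fx2")
			if err == nil && ok {
				fmt.Println("pod is Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("timed out waiting for pod to become Ready")
	}
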
	I0316 00:17:09.646926  123819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:17:09.659033  123819 ops.go:34] apiserver oom_adj: -16
	I0316 00:17:09.659059  123819 kubeadm.go:591] duration metric: took 9.272806311s to restartPrimaryControlPlane
	I0316 00:17:09.659070  123819 kubeadm.go:393] duration metric: took 9.328414192s to StartCluster
	I0316 00:17:09.659091  123819 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.659166  123819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:09.661439  123819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.661729  123819 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:17:09.663462  123819 out.go:177] * Verifying Kubernetes components...
	I0316 00:17:09.661800  123819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:17:09.661986  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:17:09.664841  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:09.664874  123819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664839  123819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664964  123819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.664980  123819 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:17:09.664847  123819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.665023  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.665037  123819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.665053  123819 addons.go:243] addon metrics-server should already be in state true
	I0316 00:17:09.665084  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.664922  123819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313436"
	I0316 00:17:09.665349  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665377  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665445  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665474  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665607  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665637  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.680337  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0316 00:17:09.680351  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0316 00:17:09.680799  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.680939  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.681331  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681366  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681541  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681560  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681736  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.681974  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.682359  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682407  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.682461  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682494  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.683660  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0316 00:17:09.684088  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.684575  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.684600  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.684992  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.685218  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.688973  123819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.688994  123819 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:17:09.689028  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.689372  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.689397  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.698126  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0316 00:17:09.698527  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.699052  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.699079  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.699407  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.699606  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.700389  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0316 00:17:09.700824  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.701308  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.701327  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.701610  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.701681  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.704168  123819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:17:09.701891  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.704403  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0316 00:17:09.706042  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:17:09.706076  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:17:09.706102  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.706988  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.707805  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.707831  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.708465  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.708556  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.709451  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.709500  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.709520  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.711354  123819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:09.709911  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.710103  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.712849  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.712865  123819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:09.712886  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:17:09.712910  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.713010  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.713202  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.713365  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.715688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716029  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.716064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716260  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.716437  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.716662  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.716826  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.725309  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0316 00:17:09.725659  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.726175  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.726191  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.726492  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.726665  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.728459  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.728721  123819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.728739  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:17:09.728753  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.732122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732546  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.732576  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732733  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.732908  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.733064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.733206  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.838182  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:09.857248  123819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:09.956751  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:17:09.956775  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:17:09.982142  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.992293  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:17:09.992319  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:17:10.000878  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:10.035138  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:10.035171  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:17:10.066721  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:11.153759  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171576504s)
	I0316 00:17:11.153815  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.153828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154237  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154241  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154262  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.154271  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.154281  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154569  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154601  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154609  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165531  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.165579  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.165868  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.165922  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165879  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536530  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.469764101s)
	I0316 00:17:11.536596  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536607  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536648  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53572281s)
	I0316 00:17:11.536694  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536713  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536963  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536988  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536995  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537001  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537005  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537010  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537013  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537019  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537218  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537365  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537376  123819 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-313436"
	I0316 00:17:11.537404  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537425  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.539481  123819 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
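
	The addon enablement above amounts to copying the manifests onto the guest and applying them with the bundled kubectl under the in-VM kubeconfig, exactly as the ssh_runner lines show. A compact sketch of that apply step, using the binary and manifest paths from the log; it only makes sense run inside the guest, elsewhere it is purely illustrative:

	package main

	import (
		"os"
		"os/exec"
	)

	// Applies the metrics-server addon manifests with the kubeconfig and
	// kubectl binary paths seen in the log above.
	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
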
	I0316 00:17:09.114699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:11.613507  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:13.204814  123454 start.go:364] duration metric: took 52.116735477s to acquireMachinesLock for "no-preload-238598"
	I0316 00:17:13.204888  123454 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:17:13.204900  123454 fix.go:54] fixHost starting: 
	I0316 00:17:13.205405  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:13.205446  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:13.222911  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0316 00:17:13.223326  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:13.223784  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:17:13.223811  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:13.224153  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:13.224338  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:13.224507  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:17:13.226028  123454 fix.go:112] recreateIfNeeded on no-preload-238598: state=Stopped err=<nil>
	I0316 00:17:13.226051  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	W0316 00:17:13.226232  123454 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:17:13.227865  123454 out.go:177] * Restarting existing kvm2 VM for "no-preload-238598" ...
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
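
	The WaitForSSH lines above repeatedly run "exit 0" over SSH with the options shown until the command succeeds, which is how libmachine decides the restarted VM is reachable. A small sketch of the same probe using the ssh binary; the host, user, key path, and options are taken from the log, and waitForSSH is a hypothetical helper rather than libmachine's own code:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// waitForSSH keeps running `exit 0` over ssh until it succeeds or the
	// timeout elapses, using the same ssh options as the log above.
	func waitForSSH(host, keyPath string, timeout time.Duration) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			"docker@" + host,
			"exit 0",
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if err := exec.Command("ssh", args...).Run(); err == nil {
				return nil
			}
			time.Sleep(3 * time.Second)
		}
		return fmt.Errorf("SSH to %s did not become available within %s", host, timeout)
	}

	func main() {
		err := waitForSSH("192.168.39.107",
			"/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa",
			2*time.Minute)
		fmt.Println(err)
	}
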
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
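
For context on the clock check above: the fix step compares the guest's wall clock against the host's and only resets the guest clock when the difference exceeds a tolerance. A minimal Go sketch of that comparison (the 1s tolerance and the function name are illustrative assumptions, not minikube's actual values):

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest/host clock delta is small
// enough to leave the guest clock alone. The 1s tolerance here is an
// assumption for illustration only.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(92 * time.Millisecond) // roughly the delta reported in the log above
	fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
}
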
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
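
The find/mv step above sidelines bridge and podman CNI configs by renaming them with a .mk_disabled suffix so only the intended CNI config stays active. A minimal Go sketch of the same idea, run locally rather than over SSH (the helper name is illustrative):

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableCNIConfigs renames bridge/podman CNI config files in dir by
// appending ".mk_disabled", mirroring the find/mv pipeline in the log.
func disableCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var disabled []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err != nil {
				return disabled, err
			}
			disabled = append(disabled, src)
		}
	}
	return disabled, nil
}

func main() {
	disabled, err := disableCNIConfigs("/etc/cni/net.d")
	fmt.Println(disabled, err)
}
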
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
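
Both of the 60s waits above ("Will wait 60s for socket path", "Will wait 60s for crictl version") are poll-until-timeout loops around a command on the guest. A minimal Go sketch of that pattern, polling locally with stat (the 500ms retry interval and the helper name are assumptions for illustration):

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

// waitForFile polls `stat <path>` until it succeeds or the timeout expires,
// mirroring the "Will wait 60s for socket path" step in the log above.
func waitForFile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("stat", path).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // retry interval is an assumption
	}
	return errors.New("timed out waiting for " + path)
}

func main() {
	if err := waitForFile("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}
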
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
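
The bash one-liner above rewrites /etc/hosts by filtering out any stale host.minikube.internal line, appending the fresh mapping, and staging the result through a temp file before copying it back. A minimal Go sketch of the same idea (the helper name is illustrative, and it writes the file directly instead of staging via /tmp and sudo cp):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\t"+host and appends
// "ip\thost", mirroring the grep -v / echo pipeline in the log above.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
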
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:17:11.540876  123819 addons.go:505] duration metric: took 1.87908534s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0316 00:17:11.862772  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.866333  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.229181  123454 main.go:141] libmachine: (no-preload-238598) Calling .Start
	I0316 00:17:13.229409  123454 main.go:141] libmachine: (no-preload-238598) Ensuring networks are active...
	I0316 00:17:13.230257  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network default is active
	I0316 00:17:13.230618  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network mk-no-preload-238598 is active
	I0316 00:17:13.231135  123454 main.go:141] libmachine: (no-preload-238598) Getting domain xml...
	I0316 00:17:13.232023  123454 main.go:141] libmachine: (no-preload-238598) Creating domain...
	I0316 00:17:14.513800  123454 main.go:141] libmachine: (no-preload-238598) Waiting to get IP...
	I0316 00:17:14.514838  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.515446  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.515520  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.515407  125029 retry.go:31] will retry after 275.965955ms: waiting for machine to come up
	I0316 00:17:14.793095  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.793594  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.793721  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.793667  125029 retry.go:31] will retry after 347.621979ms: waiting for machine to come up
	I0316 00:17:15.143230  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.143869  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.143909  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.143820  125029 retry.go:31] will retry after 301.441766ms: waiting for machine to come up
	I0316 00:17:15.446476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.446917  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.446964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.446865  125029 retry.go:31] will retry after 431.207345ms: waiting for machine to come up
	I0316 00:17:13.615911  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.616381  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:17.618352  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:16.362143  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:16.866488  123819 node_ready.go:49] node "default-k8s-diff-port-313436" has status "Ready":"True"
	I0316 00:17:16.866522  123819 node_ready.go:38] duration metric: took 7.00923342s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:16.866535  123819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:16.881909  123819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897574  123819 pod_ready.go:92] pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:16.897617  123819 pod_ready.go:81] duration metric: took 15.618728ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897630  123819 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:18.910740  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.879693  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.880186  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.880222  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.880148  125029 retry.go:31] will retry after 747.650888ms: waiting for machine to come up
	I0316 00:17:16.629378  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:16.631312  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:16.631352  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:16.631193  125029 retry.go:31] will retry after 670.902171ms: waiting for machine to come up
	I0316 00:17:17.304282  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:17.304704  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:17.304751  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:17.304658  125029 retry.go:31] will retry after 1.160879196s: waiting for machine to come up
	I0316 00:17:18.466662  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:18.467103  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:18.467136  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:18.467049  125029 retry.go:31] will retry after 948.597188ms: waiting for machine to come up
	I0316 00:17:19.417144  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:19.417623  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:19.417657  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:19.417561  125029 retry.go:31] will retry after 1.263395738s: waiting for machine to come up
	I0316 00:17:20.289713  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.613643  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
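
The test/ln pairs above maintain the OpenSSL hashed-directory layout: each CA certificate gets a symlink named <subject-hash>.0 (for example b5213941.0 for minikubeCA.pem), so verification code can find it by hash. A minimal Go sketch of creating such a link by shelling out to openssl, as the log does (the function name is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash creates the <certsDir>/<subject-hash>.0 symlink for pemPath,
// using `openssl x509 -hash -noout -in` to compute the subject hash.
func linkByHash(pemPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // ln -fs overwrites an existing link; do the same here
	return link, os.Symlink(pemPath, link)
}

func main() {
	link, err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(link, err)
}
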
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
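
Each openssl invocation above uses -checkend 86400, which fails if the certificate expires within the next 86400 seconds (24 hours). A minimal Go sketch of the same check done in-process with crypto/x509 (the path in main is one of the certs listed above; the helper name is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within d, the same condition `openssl x509 -checkend` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
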
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:21.865146  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.241535  123819 pod_ready.go:92] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.241561  123819 pod_ready.go:81] duration metric: took 5.34392174s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.241573  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247469  123819 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.247501  123819 pod_ready.go:81] duration metric: took 5.919787ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247515  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756151  123819 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.756180  123819 pod_ready.go:81] duration metric: took 508.652978ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756194  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762214  123819 pod_ready.go:92] pod "kube-proxy-btmmm" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.762254  123819 pod_ready.go:81] duration metric: took 6.041426ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762268  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769644  123819 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.769668  123819 pod_ready.go:81] duration metric: took 7.391813ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769681  123819 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:24.780737  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.682443  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:20.798804  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:20.798840  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:20.682821  125029 retry.go:31] will retry after 1.834378571s: waiting for machine to come up
	I0316 00:17:22.518539  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:22.518997  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:22.519027  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:22.518945  125029 retry.go:31] will retry after 1.944866033s: waiting for machine to come up
	I0316 00:17:24.466332  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:24.466902  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:24.466930  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:24.466847  125029 retry.go:31] will retry after 3.4483736s: waiting for machine to come up
	I0316 00:17:24.615642  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.113920  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.278017  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:29.777128  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.919457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:27.919931  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:27.919964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:27.919891  125029 retry.go:31] will retry after 3.122442649s: waiting for machine to come up
	I0316 00:17:29.613500  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.613674  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.276855  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:34.277228  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.044512  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:31.044939  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:31.044970  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:31.044884  125029 retry.go:31] will retry after 4.529863895s: waiting for machine to come up
	I0316 00:17:34.112266  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:36.118023  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:35.576311  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.576834  123454 main.go:141] libmachine: (no-preload-238598) Found IP for machine: 192.168.50.137
	I0316 00:17:35.576858  123454 main.go:141] libmachine: (no-preload-238598) Reserving static IP address...
	I0316 00:17:35.576875  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has current primary IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.577312  123454 main.go:141] libmachine: (no-preload-238598) Reserved static IP address: 192.168.50.137
	I0316 00:17:35.577355  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.577365  123454 main.go:141] libmachine: (no-preload-238598) Waiting for SSH to be available...
	I0316 00:17:35.577404  123454 main.go:141] libmachine: (no-preload-238598) DBG | skip adding static IP to network mk-no-preload-238598 - found existing host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"}
	I0316 00:17:35.577419  123454 main.go:141] libmachine: (no-preload-238598) DBG | Getting to WaitForSSH function...
	I0316 00:17:35.579640  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580061  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.580108  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580210  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH client type: external
	I0316 00:17:35.580269  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa (-rw-------)
	I0316 00:17:35.580303  123454 main.go:141] libmachine: (no-preload-238598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:35.580319  123454 main.go:141] libmachine: (no-preload-238598) DBG | About to run SSH command:
	I0316 00:17:35.580339  123454 main.go:141] libmachine: (no-preload-238598) DBG | exit 0
	I0316 00:17:35.711373  123454 main.go:141] libmachine: (no-preload-238598) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:35.711791  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetConfigRaw
	I0316 00:17:35.712598  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:35.715455  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.715929  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.715954  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.716326  123454 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:17:35.716525  123454 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:35.716551  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:35.716802  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.719298  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719612  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.719644  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719780  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.720005  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720178  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720315  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.720487  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.720666  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.720677  123454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:35.835733  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:35.835760  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836004  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:17:35.836033  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836240  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.839024  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839413  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.839445  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839627  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.839811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.839977  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.840133  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.840279  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.840485  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.840504  123454 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-238598 && echo "no-preload-238598" | sudo tee /etc/hostname
	I0316 00:17:35.976590  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-238598
	
	I0316 00:17:35.976624  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.979354  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979689  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.979720  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979879  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.980104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980267  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980445  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.980602  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.980796  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.980815  123454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-238598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-238598/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-238598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:36.106710  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:36.106750  123454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:36.106774  123454 buildroot.go:174] setting up certificates
	I0316 00:17:36.106786  123454 provision.go:84] configureAuth start
	I0316 00:17:36.106800  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:36.107104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.110050  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110431  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.110476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110592  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.113019  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113366  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.113391  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113517  123454 provision.go:143] copyHostCerts
	I0316 00:17:36.113595  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:36.113619  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:36.113699  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:36.113898  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:36.113911  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:36.113964  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:36.114051  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:36.114063  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:36.114089  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:36.114155  123454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.no-preload-238598 san=[127.0.0.1 192.168.50.137 localhost minikube no-preload-238598]
	I0316 00:17:36.239622  123454 provision.go:177] copyRemoteCerts
	I0316 00:17:36.239706  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:36.239736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.242440  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.242806  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.242841  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.243086  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.243279  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.243482  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.243623  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.330601  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:36.359600  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:17:36.384258  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:36.409195  123454 provision.go:87] duration metric: took 302.39571ms to configureAuth
	I0316 00:17:36.409239  123454 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:36.409440  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:17:36.409539  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.412280  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412618  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.412652  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.413039  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413217  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413366  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.413576  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.413803  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.413823  123454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:36.703300  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:36.703365  123454 machine.go:97] duration metric: took 986.82471ms to provisionDockerMachine
	I0316 00:17:36.703418  123454 start.go:293] postStartSetup for "no-preload-238598" (driver="kvm2")
	I0316 00:17:36.703440  123454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:36.703474  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.703838  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:36.703880  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.706655  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707019  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.707057  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707237  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.707470  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.707626  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.707822  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.794605  123454 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:36.799121  123454 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:36.799151  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:36.799222  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:36.799298  123454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:36.799423  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:36.808805  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:36.834244  123454 start.go:296] duration metric: took 130.803052ms for postStartSetup
	I0316 00:17:36.834290  123454 fix.go:56] duration metric: took 23.629390369s for fixHost
	I0316 00:17:36.834318  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.837197  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837643  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.837684  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837926  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.838155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838360  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838533  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.838721  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.838965  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.838982  123454 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:36.956309  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548256.900043121
	
	I0316 00:17:36.956352  123454 fix.go:216] guest clock: 1710548256.900043121
	I0316 00:17:36.956366  123454 fix.go:229] Guest: 2024-03-16 00:17:36.900043121 +0000 UTC Remote: 2024-03-16 00:17:36.83429667 +0000 UTC m=+356.318603082 (delta=65.746451ms)
	I0316 00:17:36.956398  123454 fix.go:200] guest clock delta is within tolerance: 65.746451ms
	I0316 00:17:36.956425  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 23.751563248s
	I0316 00:17:36.956472  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.956736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.960077  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960494  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.960524  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960678  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961247  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961454  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961522  123454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:36.961588  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.961730  123454 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:36.961756  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.964457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964801  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.964834  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964905  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965346  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965374  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.965406  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965518  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.965609  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965681  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.965739  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965866  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.966034  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:37.077559  123454 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:37.084485  123454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:37.229503  123454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:37.236783  123454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:37.236862  123454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:37.255248  123454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:37.255275  123454 start.go:494] detecting cgroup driver to use...
	I0316 00:17:37.255377  123454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:37.272795  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:37.289822  123454 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:37.289885  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:37.306082  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:37.322766  123454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:37.448135  123454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:37.614316  123454 docker.go:233] disabling docker service ...
	I0316 00:17:37.614381  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:37.630091  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:37.645025  123454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:37.773009  123454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:37.891459  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:37.906829  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:37.927910  123454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:17:37.927982  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.939166  123454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:37.939226  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.950487  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.961547  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.972402  123454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:37.983413  123454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:37.993080  123454 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:37.993147  123454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:38.007746  123454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:38.017917  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:38.158718  123454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:38.329423  123454 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:38.329520  123454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:38.334518  123454 start.go:562] Will wait 60s for crictl version
	I0316 00:17:38.334570  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.338570  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:38.375688  123454 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:38.375779  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.408167  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.444754  123454 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.277480  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.281375  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.446078  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:38.448885  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449299  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:38.449329  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449565  123454 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:38.453922  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:38.467515  123454 kubeadm.go:877] updating cluster {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:38.467646  123454 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:17:38.467690  123454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:38.511057  123454 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:17:38.511093  123454 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:38.511189  123454 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.511221  123454 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0316 00:17:38.511240  123454 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.511253  123454 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.511305  123454 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.511335  123454 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.511338  123454 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.511188  123454 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.512934  123454 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.512949  123454 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.512953  123454 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0316 00:17:38.513014  123454 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.648129  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.650306  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.661334  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0316 00:17:38.666656  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.669280  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.684494  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.690813  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.760339  123454 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0316 00:17:38.760396  123454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.760449  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.760545  123454 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0316 00:17:38.760585  123454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.760641  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908463  123454 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0316 00:17:38.908491  123454 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0316 00:17:38.908515  123454 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.908525  123454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908579  123454 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0316 00:17:38.908607  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.908615  123454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.908585  123454 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908638  123454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.908739  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.954587  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.954611  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.954699  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.961857  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.961878  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0316 00:17:38.961979  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:38.962005  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.962010  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:39.052859  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.052888  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0316 00:17:39.052907  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.052958  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.052976  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.053001  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0316 00:17:39.052963  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.053055  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.053060  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0316 00:17:39.053100  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:39.053156  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.053235  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.120914  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.612614  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
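The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" calls above are how the run waits for the apiserver process to appear on the node: -f matches against the full command line, -x requires the pattern (an extended regex) to match that entire command line, and -n returns only the newest matching PID. A standalone version of one such poll, for reference, looks like:

  # Succeeds (prints a PID) once a kube-apiserver whose command line mentions "minikube" is running.
  sudo pgrep -xnf 'kube-apiserver.*minikube.*' && echo "apiserver up" || echo "not yet"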
	I0316 00:17:40.779012  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:43.278631  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:41.133735  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.080597621s)
	I0316 00:17:41.133778  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0316 00:17:41.133890  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.080807025s)
	I0316 00:17:41.133924  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0316 00:17:41.133942  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.08085981s)
	I0316 00:17:41.133972  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133978  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.080988823s)
	I0316 00:17:41.133993  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133948  123454 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134011  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.080758975s)
	I0316 00:17:41.134031  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0316 00:17:41.134032  123454 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.01309054s)
	I0316 00:17:41.134060  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134083  123454 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0316 00:17:41.134110  123454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:41.134160  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:43.198894  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.064808781s)
	I0316 00:17:43.198926  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0316 00:17:43.198952  123454 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.198951  123454 ssh_runner.go:235] Completed: which crictl: (2.064761171s)
	I0316 00:17:43.199004  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.199051  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:43.112939  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.114446  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.613592  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.776235  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.777686  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.278307  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.110501  123454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.911421102s)
	I0316 00:17:47.110567  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0316 00:17:47.110695  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.911660704s)
	I0316 00:17:47.110728  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0316 00:17:47.110751  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:47.110703  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:47.110802  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:49.585079  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.474253503s)
	I0316 00:17:49.585109  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0316 00:17:49.585130  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.474308112s)
	I0316 00:17:49.585160  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0316 00:17:49.585134  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.585220  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.613704  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.615227  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:54.780467  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.736360  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.151102687s)
	I0316 00:17:51.736402  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0316 00:17:51.736463  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:51.736535  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:54.214591  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477993231s)
	I0316 00:17:54.214629  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0316 00:17:54.214658  123454 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:54.214728  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:55.171123  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0316 00:17:55.171204  123454 cache_images.go:123] Successfully loaded all cached images
	I0316 00:17:55.171213  123454 cache_images.go:92] duration metric: took 16.660103091s to LoadCachedImages
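With no preload tarball available for v1.29.0-rc.2, each required image is handled individually, as logged above: any stale tag is removed with crictl, the cached tarball is copied from the host, and podman loads it into the node's container storage. A rough manual equivalent of one such step on the node (single image shown; the tarball is assumed to be present under /var/lib/minikube/images) would be:

  # Drop the unresolvable tag, then load the cached image archive into the shared container storage.
  sudo crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
  sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2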
	I0316 00:17:55.171233  123454 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:17:55.171506  123454 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-238598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
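The kubelet drop-in above uses the standard systemd override idiom: the bare "ExecStart=" first clears the ExecStart inherited from kubelet.service, and the second ExecStart= line then becomes the only start command, carrying the node IP and hostname override. To see the merged unit as systemd resolves it on the node, a check along these lines works (profile name taken from this run, plain minikube binary assumed):

  # Print kubelet.service together with the 10-kubeadm.conf drop-in minikube writes.
  minikube -p no-preload-238598 ssh "sudo systemctl cat kubelet"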
	I0316 00:17:55.171617  123454 ssh_runner.go:195] Run: crio config
	I0316 00:17:55.225056  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:17:55.225078  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:55.225089  123454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:55.225110  123454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-238598 NodeName:no-preload-238598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:17:55.225278  123454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-238598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:55.225371  123454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:17:55.237834  123454 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:55.237896  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:55.248733  123454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 00:17:55.266587  123454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:17:55.285283  123454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
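At this point the kubeadm configuration rendered above has been staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2166 bytes). When a run like this needs debugging, the staged file can be read back directly, for instance:

  # Dump the kubeadm config minikube just copied onto the VM.
  minikube -p no-preload-238598 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"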
	I0316 00:17:55.303384  123454 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:55.307384  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:55.321079  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:55.453112  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:55.470573  123454 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598 for IP: 192.168.50.137
	I0316 00:17:55.470600  123454 certs.go:194] generating shared ca certs ...
	I0316 00:17:55.470623  123454 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:55.470808  123454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:55.470868  123454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:55.470906  123454 certs.go:256] generating profile certs ...
	I0316 00:17:55.471028  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.key
	I0316 00:17:55.471140  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key.0f2ae39d
	I0316 00:17:55.471195  123454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key
	I0316 00:17:55.471410  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:55.471463  123454 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:55.471483  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:55.471515  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:55.471542  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:55.471568  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:55.471612  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:55.472267  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:55.517524  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:54.115678  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:56.613196  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.277553  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:59.277770  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.567992  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:55.601463  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:55.637956  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:17:55.670063  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:55.694990  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:55.718916  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:17:55.744124  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:55.770051  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:55.794846  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:55.819060  123454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:55.836991  123454 ssh_runner.go:195] Run: openssl version
	I0316 00:17:55.844665  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:55.857643  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862493  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862561  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.868430  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:55.880551  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:55.891953  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896627  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896687  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.902539  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:55.915215  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:55.926699  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931120  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931172  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.936791  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:55.948180  123454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:55.953021  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:55.959107  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:55.965018  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:55.971159  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:55.977069  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:55.983062  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
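The six openssl calls above use -checkend 86400, which exits non-zero if the certificate expires within the next 86400 seconds (24 hours); minikube only cares about the exit status. To see the actual expiry instead of a pass/fail, any of them can be rerun with -enddate, for example:

  # Print the notAfter date of the apiserver-kubelet-client certificate.
  sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver-kubelet-client.crt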
	I0316 00:17:55.989119  123454 kubeadm.go:391] StartCluster: {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:55.989201  123454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:55.989254  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.029128  123454 cri.go:89] found id: ""
	I0316 00:17:56.029209  123454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:56.040502  123454 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:56.040525  123454 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:56.040531  123454 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:56.040577  123454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:56.051843  123454 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:56.052995  123454 kubeconfig.go:125] found "no-preload-238598" server: "https://192.168.50.137:8443"
	I0316 00:17:56.055273  123454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:56.066493  123454 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0316 00:17:56.066547  123454 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:56.066564  123454 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:56.066641  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.111015  123454 cri.go:89] found id: ""
	I0316 00:17:56.111110  123454 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:56.131392  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:56.142638  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:56.142665  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:56.142725  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:56.154318  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:56.154418  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:56.166011  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:56.176688  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:56.176752  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:56.187776  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.198216  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:56.198285  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.208661  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:56.218587  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:56.218655  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:56.230247  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:56.241302  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:56.361423  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.731067  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.369591288s)
	I0316 00:17:57.731101  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.952457  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.044540  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.179796  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:58.179894  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.680635  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.180617  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.205383  123454 api_server.go:72] duration metric: took 1.025590775s to wait for apiserver process to appear ...
	I0316 00:17:59.205411  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:59.205436  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:59.205935  123454 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0316 00:17:59.706543  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:58.613340  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:00.618869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:01.914835  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.914865  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:01.914879  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:01.972138  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.972173  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:02.206540  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.219111  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.219165  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:02.705639  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.709820  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.709850  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:03.206513  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:03.216320  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:18:03.224237  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:18:03.224263  123454 api_server.go:131] duration metric: took 4.018845389s to wait for apiserver health ...
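The healthz probes above go out unauthenticated, which is why the first responses are 403 for system:anonymous; once the rbac/bootstrap-roles and scheduling post-start hooks clear, /healthz flips from 500 to 200 and the wait finishes after roughly 4 seconds. The same unauthenticated probe can be reproduced from the host (illustrative, not part of the test):

  # Anonymous healthz check against the restarted apiserver; -k skips verification of the minikube CA.
  curl -k https://192.168.50.137:8443/healthz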
	I0316 00:18:03.224272  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:18:03.224279  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:18:03.225951  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.777309  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.777625  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.227382  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:18:03.245892  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
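The bridge CNI choice above ends with a 457-byte conflist written to /etc/cni/net.d/1-k8s.conflist; its contents are not echoed in the log, but they can be read back off the node if needed, for instance:

  # Show the generated bridge CNI configuration (contents not captured in this log).
  minikube -p no-preload-238598 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"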
	I0316 00:18:03.267423  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:18:03.281349  123454 system_pods.go:59] 8 kube-system pods found
	I0316 00:18:03.281387  123454 system_pods.go:61] "coredns-76f75df574-d2f6z" [3cd22981-0f83-4a60-9930-c103cfc2d2ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:18:03.281397  123454 system_pods.go:61] "etcd-no-preload-238598" [d98fa5b6-ad24-4c90-98c8-9e5b8f1a3250] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:18:03.281408  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [e7d7a5a0-9a4f-4df2-aaf7-44c36e5bd313] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:18:03.281420  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [a198865e-0ed5-40b6-8b10-a4fccdefa059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:18:03.281434  123454 system_pods.go:61] "kube-proxy-cjhzn" [6529873c-cb9d-42d8-991d-e450783b1707] Running
	I0316 00:18:03.281443  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [bfb373fb-ec78-4ef1-b92e-3a8af3f805a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:18:03.281457  123454 system_pods.go:61] "metrics-server-57f55c9bc5-hffvp" [4181fe7f-3e95-455b-a744-8f4dca7b870d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:18:03.281466  123454 system_pods.go:61] "storage-provisioner" [d568ae10-7b9c-4c98-8263-a09505227ac7] Running
	I0316 00:18:03.281485  123454 system_pods.go:74] duration metric: took 14.043103ms to wait for pod list to return data ...
	I0316 00:18:03.281501  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:18:03.284899  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:18:03.284923  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:18:03.284934  123454 node_conditions.go:105] duration metric: took 3.425812ms to run NodePressure ...
	I0316 00:18:03.284955  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:18:03.562930  123454 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568376  123454 kubeadm.go:733] kubelet initialised
	I0316 00:18:03.568402  123454 kubeadm.go:734] duration metric: took 5.44437ms waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568412  123454 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:18:03.574420  123454 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:03.113622  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.613724  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:07.614087  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.278238  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.776236  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.582284  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.081679  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.082343  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.113282  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.114515  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.776835  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.777258  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.778115  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.582099  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:13.082243  123454 pod_ready.go:92] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:13.082263  123454 pod_ready.go:81] duration metric: took 9.507817974s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:13.082271  123454 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:15.088733  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.613599  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:16.614876  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.280289  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.777434  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:17.089800  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.092413  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.092441  123454 pod_ready.go:81] duration metric: took 6.010161958s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.092453  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.097972  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.097996  123454 pod_ready.go:81] duration metric: took 5.533097ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.098008  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102186  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.102204  123454 pod_ready.go:81] duration metric: took 4.187939ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102213  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106692  123454 pod_ready.go:92] pod "kube-proxy-cjhzn" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.106712  123454 pod_ready.go:81] duration metric: took 4.492665ms for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106720  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111735  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.111754  123454 pod_ready.go:81] duration metric: took 5.027601ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111764  123454 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.113278  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.114061  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:22.276633  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:24.278807  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.119790  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.618664  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.115414  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.613572  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:26.778891  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:29.277585  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.619282  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.118484  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.121236  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.114043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.119153  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.614043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:31.778203  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.276424  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.618082  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.619339  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.614209  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.113521  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:36.279218  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:38.779161  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.118552  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.619543  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.614042  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.113784  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:41.278664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:43.777450  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.119118  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.119473  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.614102  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:47.112496  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:46.277664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.279095  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:46.619201  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.619302  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:49.113616  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.613449  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:18:50.777409  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:52.779497  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.278072  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.119041  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:53.121052  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:54.113699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:56.613686  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:57.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.277696  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.618835  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.118984  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.119379  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.614207  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:01.113795  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:02.281155  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.779663  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:02.618637  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.619492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:03.613777  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.114458  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:07.276601  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.277239  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.619784  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.118699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:08.613361  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.615062  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:11.277319  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.777280  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:11.119614  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.618997  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.113490  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:15.613530  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:17.613578  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
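
The cycle above can be reproduced by hand on the node to confirm that no control-plane containers exist yet. A minimal sketch using the same commands the harness runs (assumes you reach the VM with something like `minikube ssh -p <profile>`; the profile name is not shown here):

	# list any control-plane containers CRI-O knows about (empty output means none were created)
	sudo crictl ps -a --quiet --name=kube-apiserver
	sudo crictl ps -a --quiet --name=etcd
	sudo crictl ps -a --quiet --name=coredns
	# pull the same logs the harness gathers
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
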
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.276204  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.277156  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.118717  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.618005  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:19.614161  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.112808  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
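
Every "describe nodes" attempt fails the same way because nothing is serving on localhost:8443 inside the VM. An illustrative check (not part of the test output) to confirm the refusal seen above:

	# no process should be listening on the apiserver port while the containers are missing
	sudo ss -ltnp | grep 8443 || echo "nothing listening on 8443"
	# reproduces the same connection refusal the harness reports
	curl -sk https://localhost:8443/healthz || true
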
	I0316 00:19:20.777843  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.778609  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.780571  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:20.618505  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.619290  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.118778  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.113901  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:26.115541  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:27.277159  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:29.277242  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:27.618996  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:30.118650  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:28.614101  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.114366  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
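
The interleaved pod_ready lines come from the other test profiles (PIDs 123819, 123454, 123537) polling their metrics-server pods. A hedged equivalent of that manual check, assuming the upstream k8s-app=metrics-server label and a placeholder context name:

	kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context <profile> -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=120s
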
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:31.776661  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.778372  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:32.125130  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:34.619153  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.114785  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.116692  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:37.613605  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:36.276574  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.276784  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:36.619780  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.619966  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:39.614178  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.616246  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:40.779366  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.277656  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.279201  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.118560  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.120706  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:44.113022  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:46.114296  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
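	(Note: the driver repeats this container scan and log-gathering cycle for every control-plane component while the apiserver stays unreachable on localhost:8443. A minimal shell sketch of the same checks, for reproducing them by hand, follows; every remote command is copied from the log entries above, while the `minikube ssh -p` wrapper and the profile argument are assumptions, since the harness drives the node through its own SSH runner rather than the CLI.)

	    #!/usr/bin/env bash
	    # Sketch only: re-run the diagnostic loop seen in the log above by hand.
	    # Assumption: the node is reachable via `minikube ssh -p <profile>`.
	    PROFILE="${1:?usage: $0 <minikube-profile>}"

	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	      # List CRI containers in any state matching the component (empty in this run).
	      minikube ssh -p "$PROFILE" -- sudo crictl ps -a --quiet --name="$name"
	    done

	    # Same log sources the driver falls back to when no containers are found.
	    minikube ssh -p "$PROFILE" -- "sudo journalctl -u kubelet -n 400"
	    minikube ssh -p "$PROFILE" -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	    minikube ssh -p "$PROFILE" -- "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	    minikube ssh -p "$PROFILE" -- "sudo journalctl -u crio -n 400"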
	I0316 00:19:47.778494  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.277998  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.619070  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.622001  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.118739  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:48.114952  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.614794  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:52.776113  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.777687  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:52.119145  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.619675  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:53.113139  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:55.113961  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.613751  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.277412  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.277555  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.119685  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.618622  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.614914  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:02.113286  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:01.777542  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.278277  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:01.618756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.119973  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.113918  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.613434  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:06.278976  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.778022  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.124642  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.618968  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.613517  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.613699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.613997  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:11.277492  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.777429  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.619721  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.120185  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.114540  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:17.614281  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:15.781621  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.277078  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.277734  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.620224  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.118862  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.118920  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.117088  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.614917  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:22.779251  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.276842  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.118990  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:24.119699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.114563  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.614869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.277136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.777082  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:26.619354  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:28.619489  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.619807  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:32.117311  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:32.277582  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.778394  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.622010  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:33.119518  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:35.119736  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.613788  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.277007  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.776793  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:37.121196  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.619239  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:38.616664  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:41.112900  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:41.777952  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.276802  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:42.119128  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.119255  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:43.114941  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:45.614095  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:47.616615  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.277300  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.777275  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.119389  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.618309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.116327  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.614990  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:50.777563  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.778761  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.276863  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.619469  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:53.119593  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.116184  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:57.613355  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:57.776955  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.276381  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.619683  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:58.122772  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:59.616518  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.115379  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.613248  123537 pod_ready.go:81] duration metric: took 4m0.006848891s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:02.613273  123537 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:02.613280  123537 pod_ready.go:38] duration metric: took 4m5.267062496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:02.613297  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:02.613347  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:02.613393  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:02.670107  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:02.670139  123537 cri.go:89] found id: ""
	I0316 00:21:02.670149  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:02.670210  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.675144  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:02.675212  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:02.720695  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:02.720720  123537 cri.go:89] found id: ""
	I0316 00:21:02.720729  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:02.720790  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.725490  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:02.725570  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.276825  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.779811  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.617765  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.619210  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.619603  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.778908  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:02.778959  123537 cri.go:89] found id: ""
	I0316 00:21:02.778971  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:02.779028  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.784772  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:02.784864  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:02.830682  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:02.830709  123537 cri.go:89] found id: ""
	I0316 00:21:02.830719  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:02.830784  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.835733  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:02.835813  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:02.875862  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:02.875890  123537 cri.go:89] found id: ""
	I0316 00:21:02.875902  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:02.875967  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.880801  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:02.880857  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:02.921585  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:02.921611  123537 cri.go:89] found id: ""
	I0316 00:21:02.921622  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:02.921689  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.929521  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:02.929593  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.977621  123537 cri.go:89] found id: ""
	I0316 00:21:02.977646  123537 logs.go:276] 0 containers: []
	W0316 00:21:02.977657  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.977668  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:02.977723  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:03.020159  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.020186  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.020193  123537 cri.go:89] found id: ""
	I0316 00:21:03.020204  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:03.020274  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.025593  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.030718  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:03.030744  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:03.090141  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:03.090182  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:03.147416  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:03.147466  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:03.189686  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:03.189733  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:03.245980  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:03.246020  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.296494  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:03.296534  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:03.349602  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:03.349635  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:03.364783  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:03.364819  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:03.513917  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:03.513955  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:03.567916  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:03.567952  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:03.607620  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:03.607658  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:03.658683  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:03.658717  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.699797  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:03.699827  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:06.715440  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:06.733725  123537 api_server.go:72] duration metric: took 4m16.598062692s to wait for apiserver process to appear ...
	I0316 00:21:06.733759  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:06.733810  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:06.733868  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:06.775396  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:06.775431  123537 cri.go:89] found id: ""
	I0316 00:21:06.775442  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:06.775506  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.780448  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:06.780503  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:06.836927  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:06.836962  123537 cri.go:89] found id: ""
	I0316 00:21:06.836972  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:06.837025  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.841803  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:06.841869  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:06.887445  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:06.887470  123537 cri.go:89] found id: ""
	I0316 00:21:06.887479  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:06.887534  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.892112  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:06.892192  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:06.936614  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:06.936642  123537 cri.go:89] found id: ""
	I0316 00:21:06.936653  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:06.936717  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.943731  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:06.943799  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:06.986738  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:06.986764  123537 cri.go:89] found id: ""
	I0316 00:21:06.986774  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:06.986843  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.991555  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:06.991621  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:07.052047  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:07.052074  123537 cri.go:89] found id: ""
	I0316 00:21:07.052082  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:07.052133  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.057297  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:07.057358  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:07.104002  123537 cri.go:89] found id: ""
	I0316 00:21:07.104034  123537 logs.go:276] 0 containers: []
	W0316 00:21:07.104042  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:07.104049  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:07.104113  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:07.148540  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:07.148562  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:07.148566  123537 cri.go:89] found id: ""
	I0316 00:21:07.148572  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:07.148620  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.153502  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.157741  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:07.157770  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:07.197856  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:07.197889  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:07.654282  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:07.654324  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:07.708539  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:07.708579  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:07.725072  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:07.725104  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.277657  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.780721  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.121773  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.619756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.862465  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:07.862498  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:07.925812  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:07.925846  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:07.986121  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:07.986152  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:08.036774  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:08.036817  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:08.091902  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:08.091933  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:08.142096  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:08.142128  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:08.210747  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:08.210789  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:08.270225  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:08.270259  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:10.817112  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:21:10.822359  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:21:10.823955  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:10.823978  123537 api_server.go:131] duration metric: took 4.090210216s to wait for apiserver health ...
	I0316 00:21:10.823988  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:10.824019  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:10.824076  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:10.872487  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:10.872514  123537 cri.go:89] found id: ""
	I0316 00:21:10.872524  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:10.872590  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.877131  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:10.877197  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:10.916699  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:10.916728  123537 cri.go:89] found id: ""
	I0316 00:21:10.916737  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:10.916797  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.921114  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:10.921182  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:10.964099  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:10.964123  123537 cri.go:89] found id: ""
	I0316 00:21:10.964132  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:10.964191  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.968716  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:10.968788  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.008883  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.008909  123537 cri.go:89] found id: ""
	I0316 00:21:11.008919  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:11.008974  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.014068  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.014138  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.067209  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.067239  123537 cri.go:89] found id: ""
	I0316 00:21:11.067251  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:11.067315  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.072536  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.072663  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.119366  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.119399  123537 cri.go:89] found id: ""
	I0316 00:21:11.119411  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:11.119462  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.124502  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.124590  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.169458  123537 cri.go:89] found id: ""
	I0316 00:21:11.169494  123537 logs.go:276] 0 containers: []
	W0316 00:21:11.169505  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.169513  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:11.169576  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:11.218886  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:11.218923  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:11.218928  123537 cri.go:89] found id: ""
	I0316 00:21:11.218938  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:11.219002  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.223583  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.228729  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:11.228753  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:11.282781  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:11.282818  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:11.347330  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:11.347379  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.401191  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:11.401225  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.453126  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:11.453158  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.523058  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.523110  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.944108  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.944157  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:12.001558  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:12.001602  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:12.062833  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:12.062885  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:12.078726  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:12.078762  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:12.209248  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:12.209284  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:12.251891  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:12.251930  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:12.296240  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:12.296271  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:14.846244  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:14.846274  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.846279  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.846283  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.846287  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.846290  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.846294  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.846299  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.846302  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.846309  123537 system_pods.go:74] duration metric: took 4.022315588s to wait for pod list to return data ...
	I0316 00:21:14.846317  123537 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:14.848830  123537 default_sa.go:45] found service account: "default"
	I0316 00:21:14.848852  123537 default_sa.go:55] duration metric: took 2.529805ms for default service account to be created ...
	I0316 00:21:14.848859  123537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:14.861369  123537 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:14.861396  123537 system_pods.go:89] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.861401  123537 system_pods.go:89] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.861405  123537 system_pods.go:89] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.861409  123537 system_pods.go:89] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.861448  123537 system_pods.go:89] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.861456  123537 system_pods.go:89] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.861465  123537 system_pods.go:89] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.861470  123537 system_pods.go:89] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.861478  123537 system_pods.go:126] duration metric: took 12.614437ms to wait for k8s-apps to be running ...
	I0316 00:21:14.861488  123537 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:14.861534  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:14.879439  123537 system_svc.go:56] duration metric: took 17.934537ms WaitForService to wait for kubelet
	I0316 00:21:14.879484  123537 kubeadm.go:576] duration metric: took 4m24.743827748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:14.879523  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:14.882642  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:14.882673  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:14.882716  123537 node_conditions.go:105] duration metric: took 3.184841ms to run NodePressure ...
	I0316 00:21:14.882733  123537 start.go:240] waiting for startup goroutines ...
	I0316 00:21:14.882749  123537 start.go:245] waiting for cluster config update ...
	I0316 00:21:14.882789  123537 start.go:254] writing updated cluster config ...
	I0316 00:21:14.883119  123537 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:14.937804  123537 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:14.939886  123537 out.go:177] * Done! kubectl is now configured to use "embed-certs-666637" cluster and "default" namespace by default
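	The 123537 run above repeats one pattern for every control-plane component: list matching containers with crictl, then tail the last 400 lines of each container's logs, followed by host-level CRI-O, kubelet and dmesg logs. A condensed shell sketch of that loop, with every command copied from the lines above (the actual run issues each command separately over SSH via ssh_runner rather than as a script):

	    # per-component container discovery + log tail, as in the lines above
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager storage-provisioner; do
	      for id in $(sudo crictl ps -a --quiet --name="$name"); do
	        echo "=== $name $id ==="
	        sudo /usr/bin/crictl logs --tail 400 "$id"
	      done
	    done
	    # host-level logs gathered by the same run
	    sudo journalctl -u crio -n 400
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400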
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
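	In the 124077 run (v1.20.0 control plane) the apiserver is down, so every component lookup returns zero containers and each "describe nodes" pass fails with the connection-refused error captured above. A rough shell equivalent of what the run retries between attempts, with the binary path and kubeconfig copied from the lines above (the real probe is driven from Go over SSH):

	    # is any kube-apiserver process for this minikube node alive?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	    # the describe-nodes command that keeps failing while localhost:8443 refuses connections
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig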
	I0316 00:21:12.278383  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.279769  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:12.124356  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.619164  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:16.777597  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.277188  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.119492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.119935  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.278136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:22.779721  123819 pod_ready.go:81] duration metric: took 4m0.010022344s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:22.779752  123819 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:22.779762  123819 pod_ready.go:38] duration metric: took 4m5.913207723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:22.779779  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:22.779814  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:22.779876  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:22.836022  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:22.836058  123819 cri.go:89] found id: ""
	I0316 00:21:22.836069  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:22.836131  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.841289  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:22.841362  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:22.883980  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:22.884007  123819 cri.go:89] found id: ""
	I0316 00:21:22.884018  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:22.884084  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.889352  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:22.889427  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:22.929947  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:22.929977  123819 cri.go:89] found id: ""
	I0316 00:21:22.929987  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:22.930033  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.935400  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:22.935485  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:22.975548  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:22.975580  123819 cri.go:89] found id: ""
	I0316 00:21:22.975598  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:22.975671  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.981916  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:22.981998  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.019925  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.019965  123819 cri.go:89] found id: ""
	I0316 00:21:23.019977  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:23.020046  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.024870  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.024960  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.068210  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.068241  123819 cri.go:89] found id: ""
	I0316 00:21:23.068253  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:23.068344  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.073492  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.073578  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.113267  123819 cri.go:89] found id: ""
	I0316 00:21:23.113301  123819 logs.go:276] 0 containers: []
	W0316 00:21:23.113311  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.113319  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:23.113382  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:23.160155  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:23.160175  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.160179  123819 cri.go:89] found id: ""
	I0316 00:21:23.160192  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:23.160241  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.165125  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.169508  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:23.169530  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.218749  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:23.218786  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.274140  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:23.274177  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.320515  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:23.320559  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:23.835119  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:23.835173  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:23.907635  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.907691  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.925071  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:23.925126  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:23.991996  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:23.992028  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:24.032865  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.032899  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.090947  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:24.090987  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:24.285862  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:24.285896  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:24.337983  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:24.338027  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:24.379626  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:24.379657  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:21.618894  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:24.122648  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:26.918844  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.938014  123819 api_server.go:72] duration metric: took 4m17.276244s to wait for apiserver process to appear ...
	I0316 00:21:26.938053  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:26.938095  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:26.938157  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:26.983515  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:26.983538  123819 cri.go:89] found id: ""
	I0316 00:21:26.983546  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:26.983595  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:26.989278  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:26.989341  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:27.039968  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.040000  123819 cri.go:89] found id: ""
	I0316 00:21:27.040009  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:27.040078  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.045617  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:27.045687  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:27.085920  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.085948  123819 cri.go:89] found id: ""
	I0316 00:21:27.085960  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:27.086029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.090911  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:27.090989  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:27.137289  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:27.137322  123819 cri.go:89] found id: ""
	I0316 00:21:27.137333  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:27.137393  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.141956  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:27.142031  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:27.180823  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.180845  123819 cri.go:89] found id: ""
	I0316 00:21:27.180854  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:27.180919  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.185439  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:27.185523  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:27.225775  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:27.225797  123819 cri.go:89] found id: ""
	I0316 00:21:27.225805  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:27.225854  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.230648  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:27.230717  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:27.269429  123819 cri.go:89] found id: ""
	I0316 00:21:27.269465  123819 logs.go:276] 0 containers: []
	W0316 00:21:27.269477  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:27.269485  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:27.269550  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:27.308288  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.308316  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.308321  123819 cri.go:89] found id: ""
	I0316 00:21:27.308329  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:27.308378  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.312944  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.317794  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:27.317829  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:27.364287  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:27.364323  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.419482  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:27.419521  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.468553  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:27.468585  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.513287  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:27.513320  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.561382  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:27.561426  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.601292  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:27.601325  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:27.656848  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:27.656902  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:27.796212  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:27.796245  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:28.246569  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:28.246611  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:28.302971  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:28.303015  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:28.359613  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:28.359645  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:28.375844  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:28.375877  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:26.124217  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:28.619599  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
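The grep/rm pairs above implement a stale-kubeconfig check: each of the four kubeconfig files under /etc/kubernetes is removed unless it already references the expected control-plane endpoint. A minimal sketch of that logic (the loop form and the -q flag are illustrative; the paths and endpoint are the ones recorded above):

    endpoint="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # a missing file also makes grep fail, so it is (re)moved before kubeadm init
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$f"; then
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done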
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:30.921320  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:21:30.926064  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:21:30.927332  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:30.927353  123819 api_server.go:131] duration metric: took 3.989292523s to wait for apiserver health ...
	I0316 00:21:30.927361  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:30.927386  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:30.927438  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:30.975348  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:30.975376  123819 cri.go:89] found id: ""
	I0316 00:21:30.975389  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:30.975459  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:30.980128  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:30.980194  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:31.029534  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.029563  123819 cri.go:89] found id: ""
	I0316 00:21:31.029574  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:31.029627  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.034066  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:31.034149  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:31.073857  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.073884  123819 cri.go:89] found id: ""
	I0316 00:21:31.073892  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:31.073961  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.078421  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:31.078501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:31.117922  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.117951  123819 cri.go:89] found id: ""
	I0316 00:21:31.117964  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:31.118029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.122435  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:31.122501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:31.161059  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.161089  123819 cri.go:89] found id: ""
	I0316 00:21:31.161101  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:31.161155  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.165503  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:31.165572  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:31.207637  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.207669  123819 cri.go:89] found id: ""
	I0316 00:21:31.207679  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:31.207742  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.212296  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:31.212360  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:31.251480  123819 cri.go:89] found id: ""
	I0316 00:21:31.251519  123819 logs.go:276] 0 containers: []
	W0316 00:21:31.251530  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:31.251539  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:31.251608  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:31.296321  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.296345  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.296350  123819 cri.go:89] found id: ""
	I0316 00:21:31.296357  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:31.296414  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.302159  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.306501  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:31.306526  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.348347  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:31.348379  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.388542  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:31.388573  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:31.439926  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:31.439962  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:31.499674  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:31.499711  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:31.552720  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:31.552771  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.605281  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:31.605331  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.651964  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:31.651997  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.696113  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:31.696150  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.749712  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:31.749751  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.801476  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:31.801508  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:32.236105  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:32.236146  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:32.253815  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:32.253848  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:34.930730  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:34.930759  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.930763  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.930767  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.930772  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.930775  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.930778  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.930783  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.930788  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.930798  123819 system_pods.go:74] duration metric: took 4.003426137s to wait for pod list to return data ...
	I0316 00:21:34.930807  123819 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:34.933462  123819 default_sa.go:45] found service account: "default"
	I0316 00:21:34.933492  123819 default_sa.go:55] duration metric: took 2.674728ms for default service account to be created ...
	I0316 00:21:34.933500  123819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:34.939351  123819 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:34.939382  123819 system_pods.go:89] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.939393  123819 system_pods.go:89] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.939400  123819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.939406  123819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.939414  123819 system_pods.go:89] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.939420  123819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.939442  123819 system_pods.go:89] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.939454  123819 system_pods.go:89] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.939469  123819 system_pods.go:126] duration metric: took 5.962328ms to wait for k8s-apps to be running ...
	I0316 00:21:34.939482  123819 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:34.939539  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:34.958068  123819 system_svc.go:56] duration metric: took 18.572929ms WaitForService to wait for kubelet
	I0316 00:21:34.958108  123819 kubeadm.go:576] duration metric: took 4m25.296341727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:34.958130  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:34.962603  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:34.962629  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:34.962641  123819 node_conditions.go:105] duration metric: took 4.505615ms to run NodePressure ...
	I0316 00:21:34.962657  123819 start.go:240] waiting for startup goroutines ...
	I0316 00:21:34.962667  123819 start.go:245] waiting for cluster config update ...
	I0316 00:21:34.962690  123819 start.go:254] writing updated cluster config ...
	I0316 00:21:34.963009  123819 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:35.015774  123819 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:35.019103  123819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-313436" cluster and "default" namespace by default
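Before the "Done!" line above, readiness of the default-k8s-diff-port cluster was established by polling the apiserver healthz endpoint (the api_server.go lines earlier in this block). A rough equivalent check from outside the test harness, assuming the same IP/port and that minikube named the kubeconfig context after the profile as it does by default:

    curl -sk https://192.168.72.198:8444/healthz                         # expect: ok
    kubectl --context default-k8s-diff-port-313436 get --raw='/healthz'  # same probe via kubectl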
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:21:31.121456  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:33.122437  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:35.618906  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:37.619223  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:40.120743  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:42.619309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:44.619544  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:47.120179  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:49.619419  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:52.124510  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:54.125147  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:56.621651  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:59.120895  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:01.618287  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:03.620297  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:06.119870  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:08.122618  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:10.619464  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.121381  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:15.619590  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.122483  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:19.112568  123454 pod_ready.go:81] duration metric: took 4m0.000767313s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	E0316 00:22:19.112600  123454 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0316 00:22:19.112621  123454 pod_ready.go:38] duration metric: took 4m15.544198169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:22:19.112652  123454 kubeadm.go:591] duration metric: took 4m23.072115667s to restartPrimaryControlPlane
	W0316 00:22:19.112713  123454 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:22:19.112769  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:51.249327  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.136527598s)
	I0316 00:22:51.249406  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:22:51.268404  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:22:51.280832  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:22:51.292639  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:22:51.292661  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:22:51.292712  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:22:51.303272  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:22:51.303347  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:22:51.313854  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:22:51.324290  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:22:51.324361  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:22:51.334879  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.345302  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:22:51.345382  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.355682  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:22:51.366601  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:22:51.366660  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:22:51.377336  123454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:22:51.594624  123454 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:00.473055  123454 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0316 00:23:00.473140  123454 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:00.473255  123454 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:00.473415  123454 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:00.473551  123454 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:00.473682  123454 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:00.475591  123454 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:00.475704  123454 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:00.475803  123454 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:00.475905  123454 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:00.476001  123454 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:00.476100  123454 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:00.476190  123454 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:00.476281  123454 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:00.476378  123454 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:00.476516  123454 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:00.476647  123454 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:00.476715  123454 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:00.476801  123454 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:00.476879  123454 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:00.476968  123454 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0316 00:23:00.477042  123454 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:00.477166  123454 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:00.477253  123454 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:00.477378  123454 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:00.477480  123454 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:00.479084  123454 out.go:204]   - Booting up control plane ...
	I0316 00:23:00.479206  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:00.479332  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:00.479440  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:00.479541  123454 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:00.479625  123454 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:00.479697  123454 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:00.479874  123454 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:23:00.479994  123454 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003092 seconds
	I0316 00:23:00.480139  123454 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:23:00.480339  123454 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:23:00.480445  123454 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:23:00.480687  123454 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-238598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:23:00.480789  123454 kubeadm.go:309] [bootstrap-token] Using token: aspuu8.i4yhgkjx7e43mgmn
	I0316 00:23:00.482437  123454 out.go:204]   - Configuring RBAC rules ...
	I0316 00:23:00.482568  123454 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:23:00.482697  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:23:00.482917  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:23:00.483119  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:23:00.483283  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:23:00.483406  123454 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:23:00.483582  123454 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:23:00.483653  123454 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:23:00.483714  123454 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:23:00.483720  123454 kubeadm.go:309] 
	I0316 00:23:00.483815  123454 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:23:00.483833  123454 kubeadm.go:309] 
	I0316 00:23:00.483973  123454 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:23:00.483986  123454 kubeadm.go:309] 
	I0316 00:23:00.484014  123454 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:23:00.484119  123454 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:23:00.484200  123454 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:23:00.484211  123454 kubeadm.go:309] 
	I0316 00:23:00.484283  123454 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:23:00.484288  123454 kubeadm.go:309] 
	I0316 00:23:00.484360  123454 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:23:00.484366  123454 kubeadm.go:309] 
	I0316 00:23:00.484452  123454 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:23:00.484560  123454 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:23:00.484657  123454 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:23:00.484666  123454 kubeadm.go:309] 
	I0316 00:23:00.484798  123454 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:23:00.484920  123454 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:23:00.484932  123454 kubeadm.go:309] 
	I0316 00:23:00.485053  123454 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485196  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:23:00.485227  123454 kubeadm.go:309] 	--control-plane 
	I0316 00:23:00.485241  123454 kubeadm.go:309] 
	I0316 00:23:00.485357  123454 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:23:00.485367  123454 kubeadm.go:309] 
	I0316 00:23:00.485488  123454 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485646  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0316 00:23:00.485661  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:23:00.485671  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:23:00.487417  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:23:00.489063  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:23:00.526147  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
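The 457-byte conflist copied above is minikube's bridge CNI configuration; its exact contents are not reproduced in this log. For orientation only, a representative bridge conflist of roughly that shape — the field values and the 10.244.0.0/16 subnet here are assumptions, not the literal file written on this node:

    # representative example only; not the exact file minikube wrote
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF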
	I0316 00:23:00.571796  123454 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-238598 minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=no-preload-238598 minikube.k8s.io/primary=true
	I0316 00:23:00.892908  123454 ops.go:34] apiserver oom_adj: -16
	I0316 00:23:00.892994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.394077  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.893097  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.393114  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.893994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.393930  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.893428  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.393822  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.893810  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.393999  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.893998  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.393104  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.893725  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.393873  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.893432  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.394054  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.893595  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.393109  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.893621  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.393322  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.894024  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.393711  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.893465  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.393059  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.497890  123454 kubeadm.go:1107] duration metric: took 11.926069028s to wait for elevateKubeSystemPrivileges
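The repeated "get sa default" lines above are a poll: after creating the minikube-rbac clusterrolebinding, minikube retries roughly every half second until the "default" service account exists, then reports the elapsed time for elevateKubeSystemPrivileges. A sketch of that wait as a shell loop (the until-loop form is illustrative; minikube retries internally in Go):

    # illustrative poll mirroring the repeated kubectl calls recorded above
    until sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done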
	W0316 00:23:12.497951  123454 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:23:12.497962  123454 kubeadm.go:393] duration metric: took 5m16.508852945s to StartCluster
	I0316 00:23:12.497988  123454 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.498139  123454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:23:12.500632  123454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.500995  123454 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:23:12.502850  123454 out.go:177] * Verifying Kubernetes components...
	I0316 00:23:12.501089  123454 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:23:12.501233  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:23:12.504432  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:23:12.504443  123454 addons.go:69] Setting storage-provisioner=true in profile "no-preload-238598"
	I0316 00:23:12.504491  123454 addons.go:234] Setting addon storage-provisioner=true in "no-preload-238598"
	I0316 00:23:12.504502  123454 addons.go:69] Setting default-storageclass=true in profile "no-preload-238598"
	I0316 00:23:12.504515  123454 addons.go:69] Setting metrics-server=true in profile "no-preload-238598"
	I0316 00:23:12.504526  123454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-238598"
	I0316 00:23:12.504541  123454 addons.go:234] Setting addon metrics-server=true in "no-preload-238598"
	W0316 00:23:12.504551  123454 addons.go:243] addon metrics-server should already be in state true
	I0316 00:23:12.504582  123454 host.go:66] Checking if "no-preload-238598" exists ...
	W0316 00:23:12.504505  123454 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:23:12.504656  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.504996  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505012  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.505013  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505229  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.521634  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0316 00:23:12.521698  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0316 00:23:12.522283  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522377  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522836  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.522861  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.522990  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.523032  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.523203  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523375  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523737  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.523758  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524232  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.524277  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524695  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0316 00:23:12.525112  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.525610  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.525637  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.526025  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.526218  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.530010  123454 addons.go:234] Setting addon default-storageclass=true in "no-preload-238598"
	W0316 00:23:12.530029  123454 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:23:12.530053  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.530277  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.530315  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.540310  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0316 00:23:12.545850  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.545966  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0316 00:23:12.546335  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.546740  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.546761  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.547035  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.547232  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.548605  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.548626  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.549001  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.549058  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0316 00:23:12.549268  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.549323  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.549454  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.551419  123454 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:23:12.549975  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.551115  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.553027  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:23:12.553050  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:23:12.553074  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.553082  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.554948  123454 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:23:12.553404  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.556096  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556544  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.556568  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556640  123454 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.556660  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:23:12.556679  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.556769  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.557150  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.557176  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.557398  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.557600  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.557886  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.560220  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560555  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.560582  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560759  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.560982  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.561157  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.561318  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.574877  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0316 00:23:12.575802  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.576313  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.576337  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.576640  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.577015  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.578483  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.578814  123454 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.578835  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:23:12.578856  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.581832  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582439  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.582454  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.582465  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582635  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.582819  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.582969  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.729051  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:23:12.747162  123454 node_ready.go:35] waiting up to 6m0s for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.759957  123454 node_ready.go:49] node "no-preload-238598" has status "Ready":"True"
	I0316 00:23:12.759992  123454 node_ready.go:38] duration metric: took 12.79378ms for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.760006  123454 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.772201  123454 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795626  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.795660  123454 pod_ready.go:81] duration metric: took 23.429082ms for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795674  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808661  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.808688  123454 pod_ready.go:81] duration metric: took 13.006568ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808699  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821578  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.821613  123454 pod_ready.go:81] duration metric: took 12.904651ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821627  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.832585  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:23:12.832616  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:23:12.838375  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.838404  123454 pod_ready.go:81] duration metric: took 16.768452ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.838415  123454 pod_ready.go:38] duration metric: took 78.396172ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.838435  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:23:12.838522  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:23:12.889063  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.907225  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.924533  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:23:12.924565  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:23:12.947224  123454 api_server.go:72] duration metric: took 446.183679ms to wait for apiserver process to appear ...
	I0316 00:23:12.947257  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:23:12.947281  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:23:12.975463  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:12.975495  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:23:13.023702  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:23:13.039598  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:23:13.039638  123454 api_server.go:131] duration metric: took 92.372403ms to wait for apiserver health ...
	I0316 00:23:13.039649  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:23:13.069937  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:13.141358  123454 system_pods.go:59] 5 kube-system pods found
	I0316 00:23:13.141387  123454 system_pods.go:61] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.141391  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.141397  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.141400  123454 system_pods.go:61] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending
	I0316 00:23:13.141404  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.141411  123454 system_pods.go:74] duration metric: took 101.754765ms to wait for pod list to return data ...
	I0316 00:23:13.141419  123454 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:23:13.200153  123454 default_sa.go:45] found service account: "default"
	I0316 00:23:13.200193  123454 default_sa.go:55] duration metric: took 58.765381ms for default service account to be created ...
	I0316 00:23:13.200205  123454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:23:13.381398  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381431  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.381771  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.381825  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.381840  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.381849  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381862  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.382154  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.382159  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.382189  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.383303  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.383345  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.383353  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending
	I0316 00:23:13.383360  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.383368  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.383374  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.383384  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.383396  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.383440  123454 retry.go:31] will retry after 221.286986ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.408809  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.408839  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.409146  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.409191  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.409195  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.612171  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.612205  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612212  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612221  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.612226  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.612230  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.612236  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.612239  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.612260  123454 retry.go:31] will retry after 311.442515ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.934136  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.934170  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934177  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934185  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.934191  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.934197  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.934204  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.934210  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.934234  123454 retry.go:31] will retry after 453.147474ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.343055  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.435784176s)
	I0316 00:23:14.343123  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343139  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343497  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343523  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.343540  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343554  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343800  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.343876  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343895  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.404681  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.404725  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404738  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404748  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.404758  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.404767  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.404777  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.404790  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.404810  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.404821  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending
	I0316 00:23:14.404846  123454 retry.go:31] will retry after 464.575803ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.447649  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.377663696s)
	I0316 00:23:14.447706  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.447724  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448062  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448083  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448092  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.448100  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448367  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.448367  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448394  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448407  123454 addons.go:470] Verifying addon metrics-server=true in "no-preload-238598"
	I0316 00:23:14.450675  123454 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0316 00:23:14.452378  123454 addons.go:505] duration metric: took 1.951301533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0316 00:23:14.888167  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.888206  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:14.888219  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.888226  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.888236  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.888243  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.888252  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.888260  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.888292  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.888301  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:14.888325  123454 retry.go:31] will retry after 490.515879ms: missing components: kube-proxy
	I0316 00:23:15.389667  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:15.389694  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:15.389700  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Running
	I0316 00:23:15.389704  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:15.389708  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:15.389712  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:15.389716  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Running
	I0316 00:23:15.389721  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:15.389728  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:15.389735  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:15.389745  123454 system_pods.go:126] duration metric: took 2.189532563s to wait for k8s-apps to be running ...
	I0316 00:23:15.389757  123454 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:23:15.389805  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:15.409241  123454 system_svc.go:56] duration metric: took 19.469575ms WaitForService to wait for kubelet
	I0316 00:23:15.409273  123454 kubeadm.go:576] duration metric: took 2.908240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:23:15.409292  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:23:15.412530  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:23:15.412559  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:23:15.412570  123454 node_conditions.go:105] duration metric: took 3.272979ms to run NodePressure ...
	I0316 00:23:15.412585  123454 start.go:240] waiting for startup goroutines ...
	I0316 00:23:15.412594  123454 start.go:245] waiting for cluster config update ...
	I0316 00:23:15.412608  123454 start.go:254] writing updated cluster config ...
	I0316 00:23:15.412923  123454 ssh_runner.go:195] Run: rm -f paused
	I0316 00:23:15.468245  123454 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 00:23:15.470311  123454 out.go:177] * Done! kubectl is now configured to use "no-preload-238598" cluster and "default" namespace by default
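The run above ends with minikube's readiness gates: an apiserver healthz probe (the "returned 200: ok" at 00:23:13) followed by waits for system pods, the default service account, and node conditions. Below is a minimal Go sketch of that first gate, assuming the endpoint shown in this log (https://192.168.50.137:8443/healthz) and skipping TLS verification purely for illustration; this is not minikube's own code, and a real client would trust the cluster CA instead.

  // healthz_probe.go — illustrative only; endpoint taken from this log.
  package main

  import (
  	"crypto/tls"
  	"fmt"
  	"io"
  	"net/http"
  	"time"
  )

  func main() {
  	client := &http.Client{
  		Timeout: 5 * time.Second,
  		// The apiserver serves a cert signed by the cluster CA; verification is
  		// skipped here only to keep the sketch self-contained.
  		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
  	}
  	resp, err := client.Get("https://192.168.50.137:8443/healthz")
  	if err != nil {
  		fmt.Println("healthz check failed:", err)
  		return
  	}
  	defer resp.Body.Close()
  	body, _ := io.ReadAll(resp.Body)
  	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
  }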
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
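The repeated "[kubelet-check]" lines above are kubeadm polling the kubelet's local healthz endpoint on port 10248 during wait-control-plane; "connection refused" means nothing is listening there, i.e. the kubelet never came up (or exited) on this v1.20.0 node. A minimal Go sketch of that probe follows — not kubeadm's code, just the equivalent of the quoted curl, with a short retry loop:

  // kubelet_check.go — illustrative reproduction of the kubelet healthz probe.
  package main

  import (
  	"fmt"
  	"net/http"
  	"time"
  )

  func main() {
  	client := &http.Client{Timeout: 2 * time.Second}
  	for i := 0; i < 5; i++ {
  		resp, err := client.Get("http://localhost:10248/healthz")
  		if err != nil {
  			// "connection refused" here means no kubelet is listening yet.
  			fmt.Println("kubelet not healthy yet:", err)
  			time.Sleep(5 * time.Second)
  			continue
  		}
  		resp.Body.Close()
  		fmt.Println("kubelet healthz status:", resp.Status)
  		return
  	}
  	fmt.Println("gave up waiting for the kubelet")
  }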
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
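Before retrying "kubeadm init", the lines above run minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint and removed when the check fails (here every file is simply missing, so the grep exits with status 2 and the rm is a no-op). A minimal Go sketch of that pattern, using the paths and endpoint shown in this log; it is an illustration, not minikube's implementation:

  // stale_config_check.go — illustrative only; paths and endpoint from this log.
  package main

  import (
  	"fmt"
  	"os"
  	"strings"
  )

  func main() {
  	endpoint := "https://control-plane.minikube.internal:8443"
  	files := []string{
  		"/etc/kubernetes/admin.conf",
  		"/etc/kubernetes/kubelet.conf",
  		"/etc/kubernetes/controller-manager.conf",
  		"/etc/kubernetes/scheduler.conf",
  	}
  	for _, f := range files {
  		data, err := os.ReadFile(f)
  		if err != nil || !strings.Contains(string(data), endpoint) {
  			fmt.Println("removing stale or missing config:", f)
  			os.Remove(f) // ignore the error, as "rm -f" would
  		}
  	}
  }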
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 
	
	
	==> CRI-O <==
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.334198804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549271334172194,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f2ee0d69-4934-4291-ae61-db435bb01ed0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.334869087Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ab122d27-16bf-4747-83a3-08e5caba48f9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.334929010Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ab122d27-16bf-4747-83a3-08e5caba48f9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.334993650Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ab122d27-16bf-4747-83a3-08e5caba48f9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.368551390Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d70b6958-c15d-4e38-a8cc-e5cf0b584efc name=/runtime.v1.RuntimeService/Version
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.368653729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d70b6958-c15d-4e38-a8cc-e5cf0b584efc name=/runtime.v1.RuntimeService/Version
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.369747111Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bcb156ab-8e02-4304-9474-1c141e3846e0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.370184890Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549271370157189,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bcb156ab-8e02-4304-9474-1c141e3846e0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.370727428Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=16e62246-a679-45eb-9bb4-1e217251f7ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.370809652Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=16e62246-a679-45eb-9bb4-1e217251f7ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.370845052Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=16e62246-a679-45eb-9bb4-1e217251f7ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.405129580Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6b50ffee-2e74-4cc8-bebe-98c568135a31 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.405205428Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6b50ffee-2e74-4cc8-bebe-98c568135a31 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.406386121Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f9a08356-ae6d-4ba3-bed2-b29b22659606 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.406918717Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549271406874683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f9a08356-ae6d-4ba3-bed2-b29b22659606 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.407554771Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a454a205-c8eb-4d07-a699-f5c6bd0c7cf5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.407631332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a454a205-c8eb-4d07-a699-f5c6bd0c7cf5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.407678357Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a454a205-c8eb-4d07-a699-f5c6bd0c7cf5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.444047590Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c94e0563-7f80-44b4-b8b5-ea6b5e3e6321 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.444182625Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c94e0563-7f80-44b4-b8b5-ea6b5e3e6321 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.445837363Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=fc271fbd-2c05-4e98-8cb4-23ad0104afa8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.446468609Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549271446436370,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=fc271fbd-2c05-4e98-8cb4-23ad0104afa8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.447114934Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77329407-8aff-472d-acc2-f8db2d4d7cb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.447184241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77329407-8aff-472d-acc2-f8db2d4d7cb0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:34:31 old-k8s-version-402923 crio[648]: time="2024-03-16 00:34:31.447257387Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=77329407-8aff-472d-acc2-f8db2d4d7cb0 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.061034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045188] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar16 00:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.786648] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.691488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.819996] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.063026] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069540] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.190023] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.172778] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.261353] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.077973] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.071596] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.890538] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +12.663086] kauditd_printk_skb: 46 callbacks suppressed
	[Mar16 00:21] systemd-fstab-generator[5049]: Ignoring "noauto" option for root device
	[Mar16 00:23] systemd-fstab-generator[5330]: Ignoring "noauto" option for root device
	[  +0.068986] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:34:31 up 17 min,  0 users,  load average: 0.00, 0.04, 0.05
	Linux old-k8s-version-402923 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc00009e0c0, 0xc000b98630)
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: goroutine 162 [select]:
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bc1ef0, 0x4f0ac20, 0xc000204460, 0x1, 0xc00009e0c0)
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0001ce7e0, 0xc00009e0c0)
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000985080, 0xc000b928c0)
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 16 00:34:26 old-k8s-version-402923 kubelet[6511]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 16 00:34:26 old-k8s-version-402923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 16 00:34:26 old-k8s-version-402923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 16 00:34:27 old-k8s-version-402923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 16 00:34:27 old-k8s-version-402923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 16 00:34:27 old-k8s-version-402923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 16 00:34:27 old-k8s-version-402923 kubelet[6520]: I0316 00:34:27.257451    6520 server.go:416] Version: v1.20.0
	Mar 16 00:34:27 old-k8s-version-402923 kubelet[6520]: I0316 00:34:27.257774    6520 server.go:837] Client rotation is on, will bootstrap in background
	Mar 16 00:34:27 old-k8s-version-402923 kubelet[6520]: I0316 00:34:27.260026    6520 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 16 00:34:27 old-k8s-version-402923 kubelet[6520]: W0316 00:34:27.261007    6520 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 16 00:34:27 old-k8s-version-402923 kubelet[6520]: I0316 00:34:27.261731    6520 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
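The kubeadm wait-control-plane failure captured above already names its own recovery path: check the kubelet from systemd, then fall back to CRI-O's crictl to find a crashed control-plane container. A minimal sketch of those steps, assuming shell access to the node (for this profile, e.g. `minikube ssh -p old-k8s-version-402923`); CONTAINERID is a placeholder for whatever ID the ps listing returns:

	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet
	# list control-plane containers known to CRI-O, then read logs for the failing one
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID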
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (277.51294ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-402923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.43s)
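The suggestion emitted just before the failure ("try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start") maps onto this profile's start flags as recorded in the audit log above. A hedged sketch of that retry, reusing the profile name, driver, runtime and Kubernetes version from the failed run; whether it actually clears the K8S_KUBELET_NOT_RUNNING error on this host is not verified here:

	out/minikube-linux-amd64 start -p old-k8s-version-402923 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd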

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (397.76s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-666637 -n embed-certs-666637
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-16 00:36:54.827203409 +0000 UTC m=+6045.250054691
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-666637 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-666637 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.448µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-666637 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
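The two assertions that fail here reduce to a pod lookup by label and an image check on the scraper deployment. A small sketch of the equivalent manual checks, using only the context, namespace, selector, deployment and expected image quoted in the failure output above (illustrative only; these are not the harness's own calls):

	kubectl --context embed-certs-666637 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-666637 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper | grep 'registry.k8s.io/echoserver:1.4'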
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-666637 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-666637 logs -n 25: (1.510094228s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:36 UTC | 16 Mar 24 00:36 UTC |
	| start   | -p newest-cni-143629 --memory=2200 --alsologtostderr   | newest-cni-143629            | jenkins | v1.32.0 | 16 Mar 24 00:36 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:36 UTC | 16 Mar 24 00:36 UTC |
	| start   | -p auto-869135 --memory=3072                           | auto-869135                  | jenkins | v1.32.0 | 16 Mar 24 00:36 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:36:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:36:26.537069  129541 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:36:26.537336  129541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:36:26.537346  129541 out.go:304] Setting ErrFile to fd 2...
	I0316 00:36:26.537350  129541 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:36:26.537555  129541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:36:26.538165  129541 out.go:298] Setting JSON to false
	I0316 00:36:26.539181  129541 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11937,"bootTime":1710537450,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:36:26.539242  129541 start.go:139] virtualization: kvm guest
	I0316 00:36:26.541172  129541 out.go:177] * [auto-869135] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:36:26.542744  129541 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:36:26.542776  129541 notify.go:220] Checking for updates...
	I0316 00:36:26.543970  129541 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:36:26.545289  129541 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:36:26.546384  129541 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:36:26.547508  129541 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:36:26.548644  129541 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:36:26.550253  129541 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:36:26.550360  129541 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:36:26.550481  129541 config.go:182] Loaded profile config "newest-cni-143629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:36:26.550597  129541 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:36:26.588778  129541 out.go:177] * Using the kvm2 driver based on user configuration
	I0316 00:36:26.590018  129541 start.go:297] selected driver: kvm2
	I0316 00:36:26.590042  129541 start.go:901] validating driver "kvm2" against <nil>
	I0316 00:36:26.590055  129541 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:36:26.590961  129541 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:36:26.591050  129541 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:36:26.607657  129541 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:36:26.607716  129541 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0316 00:36:26.608000  129541 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:36:26.608064  129541 cni.go:84] Creating CNI manager for ""
	I0316 00:36:26.608082  129541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:36:26.608094  129541 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 00:36:26.608171  129541 start.go:340] cluster config:
	{Name:auto-869135 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-869135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgent
PID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:36:26.608283  129541 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:36:26.610176  129541 out.go:177] * Starting "auto-869135" primary control-plane node in "auto-869135" cluster
	I0316 00:36:23.754282  129288 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0316 00:36:23.754426  129288 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:36:23.754462  129288 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:36:23.770531  129288 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40263
	I0316 00:36:23.770995  129288 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:36:23.771631  129288 main.go:141] libmachine: Using API Version  1
	I0316 00:36:23.771654  129288 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:36:23.771992  129288 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:36:23.772212  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:36:23.772394  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:23.772587  129288 start.go:159] libmachine.API.Create for "newest-cni-143629" (driver="kvm2")
	I0316 00:36:23.772622  129288 client.go:168] LocalClient.Create starting
	I0316 00:36:23.772658  129288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0316 00:36:23.772704  129288 main.go:141] libmachine: Decoding PEM data...
	I0316 00:36:23.772729  129288 main.go:141] libmachine: Parsing certificate...
	I0316 00:36:23.772808  129288 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0316 00:36:23.772836  129288 main.go:141] libmachine: Decoding PEM data...
	I0316 00:36:23.772853  129288 main.go:141] libmachine: Parsing certificate...
	I0316 00:36:23.772879  129288 main.go:141] libmachine: Running pre-create checks...
	I0316 00:36:23.772899  129288 main.go:141] libmachine: (newest-cni-143629) Calling .PreCreateCheck
	I0316 00:36:23.773269  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetConfigRaw
	I0316 00:36:23.773717  129288 main.go:141] libmachine: Creating machine...
	I0316 00:36:23.773733  129288 main.go:141] libmachine: (newest-cni-143629) Calling .Create
	I0316 00:36:23.773876  129288 main.go:141] libmachine: (newest-cni-143629) Creating KVM machine...
	I0316 00:36:23.775228  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found existing default KVM network
	I0316 00:36:23.776785  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:23.776619  129314 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00025e050}
	I0316 00:36:23.784138  129288 main.go:141] libmachine: (newest-cni-143629) DBG | trying to create private KVM network mk-newest-cni-143629 192.168.39.0/24...
	I0316 00:36:23.871914  129288 main.go:141] libmachine: (newest-cni-143629) DBG | private KVM network mk-newest-cni-143629 192.168.39.0/24 created
	I0316 00:36:23.871975  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:23.871876  129314 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:36:23.871992  129288 main.go:141] libmachine: (newest-cni-143629) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629 ...
	I0316 00:36:23.872018  129288 main.go:141] libmachine: (newest-cni-143629) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0316 00:36:23.872036  129288 main.go:141] libmachine: (newest-cni-143629) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0316 00:36:24.131653  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:24.131529  129314 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa...
	I0316 00:36:24.262233  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:24.262101  129314 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/newest-cni-143629.rawdisk...
	I0316 00:36:24.262262  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Writing magic tar header
	I0316 00:36:24.262287  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Writing SSH key tar header
	I0316 00:36:24.262307  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:24.262267  129314 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629 ...
	I0316 00:36:24.262449  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629
	I0316 00:36:24.262478  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0316 00:36:24.262492  129288 main.go:141] libmachine: (newest-cni-143629) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629 (perms=drwx------)
	I0316 00:36:24.262506  129288 main.go:141] libmachine: (newest-cni-143629) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0316 00:36:24.262512  129288 main.go:141] libmachine: (newest-cni-143629) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0316 00:36:24.262520  129288 main.go:141] libmachine: (newest-cni-143629) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0316 00:36:24.262543  129288 main.go:141] libmachine: (newest-cni-143629) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0316 00:36:24.262571  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:36:24.262600  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0316 00:36:24.262616  129288 main.go:141] libmachine: (newest-cni-143629) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0316 00:36:24.262634  129288 main.go:141] libmachine: (newest-cni-143629) Creating domain...
	I0316 00:36:24.262650  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0316 00:36:24.262663  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home/jenkins
	I0316 00:36:24.262695  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Checking permissions on dir: /home
	I0316 00:36:24.262725  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Skipping /home - not owner
	I0316 00:36:24.263775  129288 main.go:141] libmachine: (newest-cni-143629) define libvirt domain using xml: 
	I0316 00:36:24.263797  129288 main.go:141] libmachine: (newest-cni-143629) <domain type='kvm'>
	I0316 00:36:24.263808  129288 main.go:141] libmachine: (newest-cni-143629)   <name>newest-cni-143629</name>
	I0316 00:36:24.263815  129288 main.go:141] libmachine: (newest-cni-143629)   <memory unit='MiB'>2200</memory>
	I0316 00:36:24.263827  129288 main.go:141] libmachine: (newest-cni-143629)   <vcpu>2</vcpu>
	I0316 00:36:24.263844  129288 main.go:141] libmachine: (newest-cni-143629)   <features>
	I0316 00:36:24.263853  129288 main.go:141] libmachine: (newest-cni-143629)     <acpi/>
	I0316 00:36:24.263865  129288 main.go:141] libmachine: (newest-cni-143629)     <apic/>
	I0316 00:36:24.263889  129288 main.go:141] libmachine: (newest-cni-143629)     <pae/>
	I0316 00:36:24.263917  129288 main.go:141] libmachine: (newest-cni-143629)     
	I0316 00:36:24.263931  129288 main.go:141] libmachine: (newest-cni-143629)   </features>
	I0316 00:36:24.263951  129288 main.go:141] libmachine: (newest-cni-143629)   <cpu mode='host-passthrough'>
	I0316 00:36:24.263959  129288 main.go:141] libmachine: (newest-cni-143629)   
	I0316 00:36:24.263963  129288 main.go:141] libmachine: (newest-cni-143629)   </cpu>
	I0316 00:36:24.263971  129288 main.go:141] libmachine: (newest-cni-143629)   <os>
	I0316 00:36:24.264003  129288 main.go:141] libmachine: (newest-cni-143629)     <type>hvm</type>
	I0316 00:36:24.264017  129288 main.go:141] libmachine: (newest-cni-143629)     <boot dev='cdrom'/>
	I0316 00:36:24.264033  129288 main.go:141] libmachine: (newest-cni-143629)     <boot dev='hd'/>
	I0316 00:36:24.264045  129288 main.go:141] libmachine: (newest-cni-143629)     <bootmenu enable='no'/>
	I0316 00:36:24.264053  129288 main.go:141] libmachine: (newest-cni-143629)   </os>
	I0316 00:36:24.264059  129288 main.go:141] libmachine: (newest-cni-143629)   <devices>
	I0316 00:36:24.264066  129288 main.go:141] libmachine: (newest-cni-143629)     <disk type='file' device='cdrom'>
	I0316 00:36:24.264080  129288 main.go:141] libmachine: (newest-cni-143629)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/boot2docker.iso'/>
	I0316 00:36:24.264090  129288 main.go:141] libmachine: (newest-cni-143629)       <target dev='hdc' bus='scsi'/>
	I0316 00:36:24.264125  129288 main.go:141] libmachine: (newest-cni-143629)       <readonly/>
	I0316 00:36:24.264152  129288 main.go:141] libmachine: (newest-cni-143629)     </disk>
	I0316 00:36:24.264170  129288 main.go:141] libmachine: (newest-cni-143629)     <disk type='file' device='disk'>
	I0316 00:36:24.264184  129288 main.go:141] libmachine: (newest-cni-143629)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0316 00:36:24.264223  129288 main.go:141] libmachine: (newest-cni-143629)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/newest-cni-143629.rawdisk'/>
	I0316 00:36:24.264256  129288 main.go:141] libmachine: (newest-cni-143629)       <target dev='hda' bus='virtio'/>
	I0316 00:36:24.264269  129288 main.go:141] libmachine: (newest-cni-143629)     </disk>
	I0316 00:36:24.264281  129288 main.go:141] libmachine: (newest-cni-143629)     <interface type='network'>
	I0316 00:36:24.264295  129288 main.go:141] libmachine: (newest-cni-143629)       <source network='mk-newest-cni-143629'/>
	I0316 00:36:24.264306  129288 main.go:141] libmachine: (newest-cni-143629)       <model type='virtio'/>
	I0316 00:36:24.264318  129288 main.go:141] libmachine: (newest-cni-143629)     </interface>
	I0316 00:36:24.264331  129288 main.go:141] libmachine: (newest-cni-143629)     <interface type='network'>
	I0316 00:36:24.264343  129288 main.go:141] libmachine: (newest-cni-143629)       <source network='default'/>
	I0316 00:36:24.264355  129288 main.go:141] libmachine: (newest-cni-143629)       <model type='virtio'/>
	I0316 00:36:24.264368  129288 main.go:141] libmachine: (newest-cni-143629)     </interface>
	I0316 00:36:24.264376  129288 main.go:141] libmachine: (newest-cni-143629)     <serial type='pty'>
	I0316 00:36:24.264389  129288 main.go:141] libmachine: (newest-cni-143629)       <target port='0'/>
	I0316 00:36:24.264399  129288 main.go:141] libmachine: (newest-cni-143629)     </serial>
	I0316 00:36:24.264415  129288 main.go:141] libmachine: (newest-cni-143629)     <console type='pty'>
	I0316 00:36:24.264436  129288 main.go:141] libmachine: (newest-cni-143629)       <target type='serial' port='0'/>
	I0316 00:36:24.264450  129288 main.go:141] libmachine: (newest-cni-143629)     </console>
	I0316 00:36:24.264462  129288 main.go:141] libmachine: (newest-cni-143629)     <rng model='virtio'>
	I0316 00:36:24.264472  129288 main.go:141] libmachine: (newest-cni-143629)       <backend model='random'>/dev/random</backend>
	I0316 00:36:24.264484  129288 main.go:141] libmachine: (newest-cni-143629)     </rng>
	I0316 00:36:24.264517  129288 main.go:141] libmachine: (newest-cni-143629)     
	I0316 00:36:24.264539  129288 main.go:141] libmachine: (newest-cni-143629)     
	I0316 00:36:24.264554  129288 main.go:141] libmachine: (newest-cni-143629)   </devices>
	I0316 00:36:24.264571  129288 main.go:141] libmachine: (newest-cni-143629) </domain>
	I0316 00:36:24.264582  129288 main.go:141] libmachine: (newest-cni-143629) 
	I0316 00:36:24.269243  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:68:2e:b0 in network default
	I0316 00:36:24.269780  129288 main.go:141] libmachine: (newest-cni-143629) Ensuring networks are active...
	I0316 00:36:24.269815  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:24.270577  129288 main.go:141] libmachine: (newest-cni-143629) Ensuring network default is active
	I0316 00:36:24.270921  129288 main.go:141] libmachine: (newest-cni-143629) Ensuring network mk-newest-cni-143629 is active
	I0316 00:36:24.271527  129288 main.go:141] libmachine: (newest-cni-143629) Getting domain xml...
	I0316 00:36:24.272359  129288 main.go:141] libmachine: (newest-cni-143629) Creating domain...
	I0316 00:36:25.659146  129288 main.go:141] libmachine: (newest-cni-143629) Waiting to get IP...
	I0316 00:36:25.659882  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:25.660341  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:25.660372  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:25.660308  129314 retry.go:31] will retry after 192.563552ms: waiting for machine to come up
	I0316 00:36:25.854870  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:25.855512  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:25.855540  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:25.855454  129314 retry.go:31] will retry after 237.903553ms: waiting for machine to come up
	I0316 00:36:26.219698  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:26.220328  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:26.220372  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:26.220282  129314 retry.go:31] will retry after 354.003084ms: waiting for machine to come up
	I0316 00:36:26.575945  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:26.576364  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:26.576412  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:26.576319  129314 retry.go:31] will retry after 440.400678ms: waiting for machine to come up
	I0316 00:36:27.017953  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:27.018421  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:27.018449  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:27.018369  129314 retry.go:31] will retry after 705.575144ms: waiting for machine to come up
	I0316 00:36:27.726465  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:27.726943  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:27.726973  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:27.726888  129314 retry.go:31] will retry after 714.800114ms: waiting for machine to come up
	I0316 00:36:28.443626  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:28.444212  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:28.444244  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:28.444148  129314 retry.go:31] will retry after 1.08667369s: waiting for machine to come up
	I0316 00:36:26.611619  129541 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:36:26.611659  129541 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0316 00:36:26.611667  129541 cache.go:56] Caching tarball of preloaded images
	I0316 00:36:26.611760  129541 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:36:26.611776  129541 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0316 00:36:26.611884  129541 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/auto-869135/config.json ...
	I0316 00:36:26.611906  129541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/auto-869135/config.json: {Name:mk83267919436076fc8b00c054fefc9fd338b866 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:36:26.612069  129541 start.go:360] acquireMachinesLock for auto-869135: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:36:29.532896  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:29.533319  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:29.533343  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:29.533272  129314 retry.go:31] will retry after 1.449766757s: waiting for machine to come up
	I0316 00:36:30.985028  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:30.985616  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:30.985640  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:30.985556  129314 retry.go:31] will retry after 1.325586186s: waiting for machine to come up
	I0316 00:36:32.312451  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:32.312971  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:32.312998  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:32.312925  129314 retry.go:31] will retry after 1.573418957s: waiting for machine to come up
	I0316 00:36:33.887591  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:33.888144  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:33.888175  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:33.888092  129314 retry.go:31] will retry after 2.631176069s: waiting for machine to come up
	I0316 00:36:36.523083  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:36.523544  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:36.523575  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:36.523501  129314 retry.go:31] will retry after 3.436480386s: waiting for machine to come up
	I0316 00:36:39.961636  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:39.962031  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:39.962059  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:39.961980  129314 retry.go:31] will retry after 3.992834089s: waiting for machine to come up
	I0316 00:36:43.955863  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:43.956365  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:36:43.956398  129288 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:36:43.956295  129314 retry.go:31] will retry after 4.819448418s: waiting for machine to come up
	I0316 00:36:50.424423  129541 start.go:364] duration metric: took 23.812326711s to acquireMachinesLock for "auto-869135"
	I0316 00:36:50.424518  129541 start.go:93] Provisioning new machine with config: &{Name:auto-869135 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-869135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:36:50.424683  129541 start.go:125] createHost starting for "" (driver="kvm2")
	I0316 00:36:50.427057  129541 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0316 00:36:50.427257  129541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:36:50.427304  129541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:36:50.446762  129541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0316 00:36:50.447226  129541 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:36:50.447860  129541 main.go:141] libmachine: Using API Version  1
	I0316 00:36:50.447881  129541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:36:50.448193  129541 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:36:50.448379  129541 main.go:141] libmachine: (auto-869135) Calling .GetMachineName
	I0316 00:36:50.448551  129541 main.go:141] libmachine: (auto-869135) Calling .DriverName
	I0316 00:36:50.448723  129541 start.go:159] libmachine.API.Create for "auto-869135" (driver="kvm2")
	I0316 00:36:50.448761  129541 client.go:168] LocalClient.Create starting
	I0316 00:36:50.448792  129541 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem
	I0316 00:36:50.448835  129541 main.go:141] libmachine: Decoding PEM data...
	I0316 00:36:50.448854  129541 main.go:141] libmachine: Parsing certificate...
	I0316 00:36:50.448922  129541 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem
	I0316 00:36:50.448950  129541 main.go:141] libmachine: Decoding PEM data...
	I0316 00:36:50.448963  129541 main.go:141] libmachine: Parsing certificate...
	I0316 00:36:50.449003  129541 main.go:141] libmachine: Running pre-create checks...
	I0316 00:36:50.449014  129541 main.go:141] libmachine: (auto-869135) Calling .PreCreateCheck
	I0316 00:36:50.449387  129541 main.go:141] libmachine: (auto-869135) Calling .GetConfigRaw
	I0316 00:36:50.449873  129541 main.go:141] libmachine: Creating machine...
	I0316 00:36:50.449891  129541 main.go:141] libmachine: (auto-869135) Calling .Create
	I0316 00:36:50.450030  129541 main.go:141] libmachine: (auto-869135) Creating KVM machine...
	I0316 00:36:50.451226  129541 main.go:141] libmachine: (auto-869135) DBG | found existing default KVM network
	I0316 00:36:50.453031  129541 main.go:141] libmachine: (auto-869135) DBG | I0316 00:36:50.452854  129691 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:f1:aa:c0} reservation:<nil>}
	I0316 00:36:50.454465  129541 main.go:141] libmachine: (auto-869135) DBG | I0316 00:36:50.454384  129691 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000280a30}
	I0316 00:36:50.460327  129541 main.go:141] libmachine: (auto-869135) DBG | trying to create private KVM network mk-auto-869135 192.168.50.0/24...
	I0316 00:36:50.532640  129541 main.go:141] libmachine: (auto-869135) DBG | private KVM network mk-auto-869135 192.168.50.0/24 created
	I0316 00:36:50.532689  129541 main.go:141] libmachine: (auto-869135) DBG | I0316 00:36:50.532595  129691 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:36:50.532702  129541 main.go:141] libmachine: (auto-869135) Setting up store path in /home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135 ...
	I0316 00:36:50.532728  129541 main.go:141] libmachine: (auto-869135) Building disk image from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0316 00:36:50.532749  129541 main.go:141] libmachine: (auto-869135) Downloading /home/jenkins/minikube-integration/17991-75602/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso...
	I0316 00:36:50.785586  129541 main.go:141] libmachine: (auto-869135) DBG | I0316 00:36:50.785423  129691 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135/id_rsa...
	I0316 00:36:51.033139  129541 main.go:141] libmachine: (auto-869135) DBG | I0316 00:36:51.032985  129691 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135/auto-869135.rawdisk...
	I0316 00:36:51.033172  129541 main.go:141] libmachine: (auto-869135) DBG | Writing magic tar header
	I0316 00:36:51.033183  129541 main.go:141] libmachine: (auto-869135) DBG | Writing SSH key tar header
	I0316 00:36:51.033191  129541 main.go:141] libmachine: (auto-869135) DBG | I0316 00:36:51.033110  129691 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135 ...
	I0316 00:36:51.033259  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135
	I0316 00:36:51.033293  129541 main.go:141] libmachine: (auto-869135) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135 (perms=drwx------)
	I0316 00:36:51.033300  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube/machines
	I0316 00:36:51.033318  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:36:51.033329  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17991-75602
	I0316 00:36:51.033341  129541 main.go:141] libmachine: (auto-869135) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube/machines (perms=drwxr-xr-x)
	I0316 00:36:51.033356  129541 main.go:141] libmachine: (auto-869135) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602/.minikube (perms=drwxr-xr-x)
	I0316 00:36:51.033366  129541 main.go:141] libmachine: (auto-869135) Setting executable bit set on /home/jenkins/minikube-integration/17991-75602 (perms=drwxrwxr-x)
	I0316 00:36:51.033373  129541 main.go:141] libmachine: (auto-869135) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0316 00:36:51.033382  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0316 00:36:51.033390  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home/jenkins
	I0316 00:36:51.033398  129541 main.go:141] libmachine: (auto-869135) DBG | Checking permissions on dir: /home
	I0316 00:36:51.033412  129541 main.go:141] libmachine: (auto-869135) DBG | Skipping /home - not owner
	I0316 00:36:51.033425  129541 main.go:141] libmachine: (auto-869135) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0316 00:36:51.033437  129541 main.go:141] libmachine: (auto-869135) Creating domain...
	I0316 00:36:51.034448  129541 main.go:141] libmachine: (auto-869135) define libvirt domain using xml: 
	I0316 00:36:51.034471  129541 main.go:141] libmachine: (auto-869135) <domain type='kvm'>
	I0316 00:36:51.034478  129541 main.go:141] libmachine: (auto-869135)   <name>auto-869135</name>
	I0316 00:36:51.034485  129541 main.go:141] libmachine: (auto-869135)   <memory unit='MiB'>3072</memory>
	I0316 00:36:51.034512  129541 main.go:141] libmachine: (auto-869135)   <vcpu>2</vcpu>
	I0316 00:36:51.034529  129541 main.go:141] libmachine: (auto-869135)   <features>
	I0316 00:36:51.034546  129541 main.go:141] libmachine: (auto-869135)     <acpi/>
	I0316 00:36:51.034557  129541 main.go:141] libmachine: (auto-869135)     <apic/>
	I0316 00:36:51.034568  129541 main.go:141] libmachine: (auto-869135)     <pae/>
	I0316 00:36:51.034579  129541 main.go:141] libmachine: (auto-869135)     
	I0316 00:36:51.034585  129541 main.go:141] libmachine: (auto-869135)   </features>
	I0316 00:36:51.034596  129541 main.go:141] libmachine: (auto-869135)   <cpu mode='host-passthrough'>
	I0316 00:36:51.034600  129541 main.go:141] libmachine: (auto-869135)   
	I0316 00:36:51.034605  129541 main.go:141] libmachine: (auto-869135)   </cpu>
	I0316 00:36:51.034623  129541 main.go:141] libmachine: (auto-869135)   <os>
	I0316 00:36:51.034631  129541 main.go:141] libmachine: (auto-869135)     <type>hvm</type>
	I0316 00:36:51.034636  129541 main.go:141] libmachine: (auto-869135)     <boot dev='cdrom'/>
	I0316 00:36:51.034653  129541 main.go:141] libmachine: (auto-869135)     <boot dev='hd'/>
	I0316 00:36:51.034661  129541 main.go:141] libmachine: (auto-869135)     <bootmenu enable='no'/>
	I0316 00:36:51.034667  129541 main.go:141] libmachine: (auto-869135)   </os>
	I0316 00:36:51.034674  129541 main.go:141] libmachine: (auto-869135)   <devices>
	I0316 00:36:51.034703  129541 main.go:141] libmachine: (auto-869135)     <disk type='file' device='cdrom'>
	I0316 00:36:51.034720  129541 main.go:141] libmachine: (auto-869135)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135/boot2docker.iso'/>
	I0316 00:36:51.034783  129541 main.go:141] libmachine: (auto-869135)       <target dev='hdc' bus='scsi'/>
	I0316 00:36:51.034810  129541 main.go:141] libmachine: (auto-869135)       <readonly/>
	I0316 00:36:51.034819  129541 main.go:141] libmachine: (auto-869135)     </disk>
	I0316 00:36:51.034835  129541 main.go:141] libmachine: (auto-869135)     <disk type='file' device='disk'>
	I0316 00:36:51.034887  129541 main.go:141] libmachine: (auto-869135)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0316 00:36:51.034911  129541 main.go:141] libmachine: (auto-869135)       <source file='/home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135/auto-869135.rawdisk'/>
	I0316 00:36:51.034920  129541 main.go:141] libmachine: (auto-869135)       <target dev='hda' bus='virtio'/>
	I0316 00:36:51.034936  129541 main.go:141] libmachine: (auto-869135)     </disk>
	I0316 00:36:51.034961  129541 main.go:141] libmachine: (auto-869135)     <interface type='network'>
	I0316 00:36:51.034974  129541 main.go:141] libmachine: (auto-869135)       <source network='mk-auto-869135'/>
	I0316 00:36:51.034982  129541 main.go:141] libmachine: (auto-869135)       <model type='virtio'/>
	I0316 00:36:51.034988  129541 main.go:141] libmachine: (auto-869135)     </interface>
	I0316 00:36:51.035009  129541 main.go:141] libmachine: (auto-869135)     <interface type='network'>
	I0316 00:36:51.035039  129541 main.go:141] libmachine: (auto-869135)       <source network='default'/>
	I0316 00:36:51.035056  129541 main.go:141] libmachine: (auto-869135)       <model type='virtio'/>
	I0316 00:36:51.035079  129541 main.go:141] libmachine: (auto-869135)     </interface>
	I0316 00:36:51.035098  129541 main.go:141] libmachine: (auto-869135)     <serial type='pty'>
	I0316 00:36:51.035111  129541 main.go:141] libmachine: (auto-869135)       <target port='0'/>
	I0316 00:36:51.035118  129541 main.go:141] libmachine: (auto-869135)     </serial>
	I0316 00:36:51.035127  129541 main.go:141] libmachine: (auto-869135)     <console type='pty'>
	I0316 00:36:51.035139  129541 main.go:141] libmachine: (auto-869135)       <target type='serial' port='0'/>
	I0316 00:36:51.035150  129541 main.go:141] libmachine: (auto-869135)     </console>
	I0316 00:36:51.035159  129541 main.go:141] libmachine: (auto-869135)     <rng model='virtio'>
	I0316 00:36:51.035168  129541 main.go:141] libmachine: (auto-869135)       <backend model='random'>/dev/random</backend>
	I0316 00:36:51.035178  129541 main.go:141] libmachine: (auto-869135)     </rng>
	I0316 00:36:51.035185  129541 main.go:141] libmachine: (auto-869135)     
	I0316 00:36:51.035193  129541 main.go:141] libmachine: (auto-869135)     
	I0316 00:36:51.035201  129541 main.go:141] libmachine: (auto-869135)   </devices>
	I0316 00:36:51.035214  129541 main.go:141] libmachine: (auto-869135) </domain>
	I0316 00:36:51.035233  129541 main.go:141] libmachine: (auto-869135) 
	I0316 00:36:51.039334  129541 main.go:141] libmachine: (auto-869135) DBG | domain auto-869135 has defined MAC address 52:54:00:20:42:95 in network default
	I0316 00:36:51.039845  129541 main.go:141] libmachine: (auto-869135) Ensuring networks are active...
	I0316 00:36:51.039877  129541 main.go:141] libmachine: (auto-869135) DBG | domain auto-869135 has defined MAC address 52:54:00:71:8d:ff in network mk-auto-869135
	I0316 00:36:51.040656  129541 main.go:141] libmachine: (auto-869135) Ensuring network default is active
	I0316 00:36:51.041003  129541 main.go:141] libmachine: (auto-869135) Ensuring network mk-auto-869135 is active
	I0316 00:36:51.041451  129541 main.go:141] libmachine: (auto-869135) Getting domain xml...
	I0316 00:36:51.042164  129541 main.go:141] libmachine: (auto-869135) Creating domain...
	I0316 00:36:48.777657  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:48.778146  129288 main.go:141] libmachine: (newest-cni-143629) Found IP for machine: 192.168.39.122
	I0316 00:36:48.778163  129288 main.go:141] libmachine: (newest-cni-143629) Reserving static IP address...
	I0316 00:36:48.778185  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has current primary IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:48.778569  129288 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find host DHCP lease matching {name: "newest-cni-143629", mac: "52:54:00:8b:6b:4d", ip: "192.168.39.122"} in network mk-newest-cni-143629
	I0316 00:36:48.855927  129288 main.go:141] libmachine: (newest-cni-143629) Reserved static IP address: 192.168.39.122
	I0316 00:36:48.855992  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Getting to WaitForSSH function...
	I0316 00:36:48.856002  129288 main.go:141] libmachine: (newest-cni-143629) Waiting for SSH to be available...
	I0316 00:36:48.859573  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:48.860000  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:minikube Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:48.860035  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:48.860177  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Using SSH client type: external
	I0316 00:36:48.860207  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa (-rw-------)
	I0316 00:36:48.860266  129288 main.go:141] libmachine: (newest-cni-143629) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:36:48.860288  129288 main.go:141] libmachine: (newest-cni-143629) DBG | About to run SSH command:
	I0316 00:36:48.860322  129288 main.go:141] libmachine: (newest-cni-143629) DBG | exit 0
	I0316 00:36:48.987272  129288 main.go:141] libmachine: (newest-cni-143629) DBG | SSH cmd err, output: <nil>: 
	I0316 00:36:48.987593  129288 main.go:141] libmachine: (newest-cni-143629) KVM machine creation complete!
	I0316 00:36:48.987953  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetConfigRaw
	I0316 00:36:48.988531  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:48.988775  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:48.988987  129288 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0316 00:36:48.989006  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetState
	I0316 00:36:48.990413  129288 main.go:141] libmachine: Detecting operating system of created instance...
	I0316 00:36:48.990440  129288 main.go:141] libmachine: Waiting for SSH to be available...
	I0316 00:36:48.990448  129288 main.go:141] libmachine: Getting to WaitForSSH function...
	I0316 00:36:48.990457  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:48.992866  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:48.993245  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:48.993265  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:48.993450  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:48.993697  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:48.993897  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:48.994066  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:48.994260  129288 main.go:141] libmachine: Using SSH client type: native
	I0316 00:36:48.994463  129288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:36:48.994477  129288 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0316 00:36:49.106751  129288 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:36:49.106799  129288 main.go:141] libmachine: Detecting the provisioner...
	I0316 00:36:49.106808  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:49.109448  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.109871  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.109902  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.110025  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:49.110202  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.110391  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.110523  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:49.110701  129288 main.go:141] libmachine: Using SSH client type: native
	I0316 00:36:49.110919  129288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:36:49.110932  129288 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0316 00:36:49.224330  129288 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0316 00:36:49.224390  129288 main.go:141] libmachine: found compatible host: buildroot
	I0316 00:36:49.224400  129288 main.go:141] libmachine: Provisioning with buildroot...
	I0316 00:36:49.224417  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:36:49.224688  129288 buildroot.go:166] provisioning hostname "newest-cni-143629"
	I0316 00:36:49.224722  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:36:49.224897  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:49.227632  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.228057  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.228092  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.228309  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:49.228508  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.228667  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.228852  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:49.229031  129288 main.go:141] libmachine: Using SSH client type: native
	I0316 00:36:49.229236  129288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:36:49.229256  129288 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-143629 && echo "newest-cni-143629" | sudo tee /etc/hostname
	I0316 00:36:49.358948  129288 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-143629
	
	I0316 00:36:49.358984  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:49.361808  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.362081  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.362121  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.362262  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:49.362471  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.362669  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.362799  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:49.362969  129288 main.go:141] libmachine: Using SSH client type: native
	I0316 00:36:49.363192  129288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:36:49.363210  129288 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-143629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-143629/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-143629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:36:49.484940  129288 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:36:49.484969  129288 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:36:49.485000  129288 buildroot.go:174] setting up certificates
	I0316 00:36:49.485014  129288 provision.go:84] configureAuth start
	I0316 00:36:49.485025  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:36:49.485350  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:36:49.487985  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.488351  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.488372  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.488502  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:49.490789  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.491147  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.491169  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.491307  129288 provision.go:143] copyHostCerts
	I0316 00:36:49.491385  129288 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:36:49.491396  129288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:36:49.491458  129288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:36:49.491552  129288 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:36:49.491560  129288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:36:49.491585  129288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:36:49.491657  129288 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:36:49.491665  129288 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:36:49.491685  129288 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:36:49.491740  129288 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.newest-cni-143629 san=[127.0.0.1 192.168.39.122 localhost minikube newest-cni-143629]
	I0316 00:36:49.718025  129288 provision.go:177] copyRemoteCerts
	I0316 00:36:49.718095  129288 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:36:49.718120  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:49.720722  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.721020  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.721047  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.721252  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:49.721462  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.721632  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:49.721785  129288 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:36:49.810178  129288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:36:49.835446  129288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:36:49.862107  129288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:36:49.887998  129288 provision.go:87] duration metric: took 402.967134ms to configureAuth
	I0316 00:36:49.888029  129288 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:36:49.888234  129288 config.go:182] Loaded profile config "newest-cni-143629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:36:49.888329  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:49.891133  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.891487  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:49.891515  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:49.891733  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:49.891942  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.892114  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:49.892259  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:49.892409  129288 main.go:141] libmachine: Using SSH client type: native
	I0316 00:36:49.892599  129288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:36:49.892618  129288 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:36:50.168604  129288 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:36:50.168652  129288 main.go:141] libmachine: Checking connection to Docker...
	I0316 00:36:50.168661  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetURL
	I0316 00:36:50.170041  129288 main.go:141] libmachine: (newest-cni-143629) DBG | Using libvirt version 6000000
	I0316 00:36:50.172238  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.172619  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.172649  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.172797  129288 main.go:141] libmachine: Docker is up and running!
	I0316 00:36:50.172815  129288 main.go:141] libmachine: Reticulating splines...
	I0316 00:36:50.172824  129288 client.go:171] duration metric: took 26.400189937s to LocalClient.Create
	I0316 00:36:50.172845  129288 start.go:167] duration metric: took 26.40026007s to libmachine.API.Create "newest-cni-143629"
	I0316 00:36:50.172855  129288 start.go:293] postStartSetup for "newest-cni-143629" (driver="kvm2")
	I0316 00:36:50.172868  129288 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:36:50.172885  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:50.173102  129288 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:36:50.173126  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:50.175251  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.175611  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.175644  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.175839  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:50.176046  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:50.176174  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:50.176368  129288 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:36:50.262990  129288 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:36:50.267569  129288 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:36:50.267614  129288 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:36:50.267686  129288 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:36:50.267783  129288 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:36:50.267888  129288 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:36:50.277349  129288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:36:50.302223  129288 start.go:296] duration metric: took 129.35417ms for postStartSetup
	I0316 00:36:50.302287  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetConfigRaw
	I0316 00:36:50.302898  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:36:50.305422  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.305668  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.305695  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.305983  129288 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/config.json ...
	I0316 00:36:50.306225  129288 start.go:128] duration metric: took 26.554780247s to createHost
	I0316 00:36:50.306257  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:50.308365  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.308687  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.308718  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.308837  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:50.309019  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:50.309207  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:50.309332  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:50.309458  129288 main.go:141] libmachine: Using SSH client type: native
	I0316 00:36:50.309642  129288 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:36:50.309652  129288 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:36:50.424251  129288 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710549410.398241386
	
	I0316 00:36:50.424278  129288 fix.go:216] guest clock: 1710549410.398241386
	I0316 00:36:50.424288  129288 fix.go:229] Guest: 2024-03-16 00:36:50.398241386 +0000 UTC Remote: 2024-03-16 00:36:50.306240435 +0000 UTC m=+26.688705181 (delta=92.000951ms)
	I0316 00:36:50.424332  129288 fix.go:200] guest clock delta is within tolerance: 92.000951ms
	I0316 00:36:50.424343  129288 start.go:83] releasing machines lock for "newest-cni-143629", held for 26.673054389s
	I0316 00:36:50.424375  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:50.424685  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:36:50.427578  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.427941  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.427968  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.428134  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:50.428675  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:50.428876  129288 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:36:50.429045  129288 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:36:50.429091  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:50.429121  129288 ssh_runner.go:195] Run: cat /version.json
	I0316 00:36:50.429152  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:36:50.431632  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.432077  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.432111  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.432134  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.432336  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:50.432453  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:50.432488  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:50.432553  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:50.432594  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:36:50.432698  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:50.432766  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:36:50.432913  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:36:50.432908  129288 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:36:50.433059  129288 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:36:50.516738  129288 ssh_runner.go:195] Run: systemctl --version
	I0316 00:36:50.546695  129288 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:36:50.716041  129288 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:36:50.723428  129288 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:36:50.723519  129288 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:36:50.741718  129288 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
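The "%!p(MISSING)" in the find invocation above is the same missing-argument artifact. Given the "disabled [/etc/cni/net.d/87-podman-bridge.conflist]" result on the next line, the command presumably used find's -printf "%p, " to list each bridge/podman CNI config as it renamed it. A plausible reconstruction, with quoting assumed:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name "*bridge*" -or -name "*podman*" \) -and -not -name "*.mk_disabled" \) \
      -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" \;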
	I0316 00:36:50.741745  129288 start.go:494] detecting cgroup driver to use...
	I0316 00:36:50.741818  129288 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:36:50.759819  129288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:36:50.775530  129288 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:36:50.775620  129288 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:36:50.791542  129288 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:36:50.806570  129288 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:36:50.929771  129288 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:36:51.093699  129288 docker.go:233] disabling docker service ...
	I0316 00:36:51.093789  129288 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:36:51.109978  129288 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:36:51.124071  129288 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:36:51.247739  129288 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:36:51.363843  129288 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:36:51.379182  129288 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:36:51.398585  129288 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:36:51.398648  129288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:36:51.409360  129288 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:36:51.409417  129288 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:36:51.419890  129288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:36:51.430611  129288 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:36:51.440762  129288 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:36:51.451703  129288 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:36:51.461457  129288 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:36:51.461520  129288 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:36:51.475467  129288 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:36:51.485885  129288 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:36:51.611043  129288 ssh_runner.go:195] Run: sudo systemctl restart crio
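Taken together, the sed edits above (pause image, cgroup manager, and the re-inserted conmon_cgroup) imply that the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf ends up with values roughly like the sketch below. The section headers are an assumption based on the standard CRI-O config layout; only the key/value lines are visible in the log:

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"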
	I0316 00:36:51.767045  129288 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:36:51.767136  129288 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:36:51.773178  129288 start.go:562] Will wait 60s for crictl version
	I0316 00:36:51.773254  129288 ssh_runner.go:195] Run: which crictl
	I0316 00:36:51.777108  129288 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:36:51.814286  129288 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:36:51.814386  129288 ssh_runner.go:195] Run: crio --version
	I0316 00:36:51.844679  129288 ssh_runner.go:195] Run: crio --version
	I0316 00:36:51.888442  129288 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:36:51.889847  129288 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:36:51.893317  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:51.893907  129288 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:36:39 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:36:51.893939  129288 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:36:51.894201  129288 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:36:51.902806  129288 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
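The bash one-liner above rewrites /etc/hosts in place: it drops any existing host.minikube.internal entry, appends the gateway mapping, writes the result to a temp file, and copies it back, so the guest ends up with an entry like:

    192.168.39.1	host.minikube.internal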
	I0316 00:36:51.918557  129288 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0316 00:36:51.919864  129288 kubeadm.go:877] updating cluster {Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host
Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:36:51.920012  129288 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:36:51.920069  129288 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:36:51.958004  129288 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:36:51.958080  129288 ssh_runner.go:195] Run: which lz4
	I0316 00:36:51.962734  129288 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:36:51.967241  129288 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:36:51.967276  129288 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0316 00:36:53.576538  129288 crio.go:444] duration metric: took 1.613837413s to copy over tarball
	I0316 00:36:53.576627  129288 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	
	
	==> CRI-O <==
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.662687895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549415662663157,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bd0170df-b0b0-492e-b639-f6d2d6c1878e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.663228262Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8983a3c2-132c-4dd2-a4f6-4d103b8f78a1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.663278365Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8983a3c2-132c-4dd2-a4f6-4d103b8f78a1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.663589100Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8983a3c2-132c-4dd2-a4f6-4d103b8f78a1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.721412249Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=754db4be-ba5b-42b9-a4c6-715074430724 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.721531586Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=754db4be-ba5b-42b9-a4c6-715074430724 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.723588485Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=776ecfaf-fa24-43b9-986d-720a37fe49f2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.724216398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549415724183669,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=776ecfaf-fa24-43b9-986d-720a37fe49f2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.725041915Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b188886b-b29a-4a3f-9edd-94e8c9e082fc name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.725095656Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b188886b-b29a-4a3f-9edd-94e8c9e082fc name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.725278481Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b188886b-b29a-4a3f-9edd-94e8c9e082fc name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.789320137Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f9e39fa-f7a3-4adf-b05f-97e283548de5 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.789429442Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f9e39fa-f7a3-4adf-b05f-97e283548de5 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.791212492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b959ccf-a5e3-41b0-a5b7-a15a9db40d44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.791818743Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549415791793429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b959ccf-a5e3-41b0-a5b7-a15a9db40d44 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.792705396Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62e08578-4f83-474d-995b-35f0743e3bdc name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.792777587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62e08578-4f83-474d-995b-35f0743e3bdc name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.793036868Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=62e08578-4f83-474d-995b-35f0743e3bdc name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.842720420Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a46f3267-0d4a-474a-bcc8-cd7f62adde29 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.842850820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a46f3267-0d4a-474a-bcc8-cd7f62adde29 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.844240624Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ba560e4f-e192-4b54-a80a-91ba9fb4ecb8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.844998379Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549415844963599,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ba560e4f-e192-4b54-a80a-91ba9fb4ecb8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.845743265Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=eb07a6a5-cda2-4057-9ca6-cab7885f5466 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.845822670Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=eb07a6a5-cda2-4057-9ca6-cab7885f5466 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:55 embed-certs-666637 crio[689]: time="2024-03-16 00:36:55.846160926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548239045189917,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4aff39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 2,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d587ebe2db0fa4079e4d9d6521d24397104190fc2d707f95b22036cc5fc68f08,PodSandboxId:44db51216f85d071797b6eb6276f9dada422928ba67b0d083a3839e753b8c99c,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548217665127622,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: b494922e-2643-471f-bcda-1510733942e8,},Annotations:map[string]string{io.kubernetes.container.hash: 37644a16,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminatio
nMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b,PodSandboxId:491918599a8b4570f11f0f5c4216acba680e024b4203469636213f294d07e609,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548215896611330,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-t8xb4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9feb9bc-2a4a-402b-9753-f2f84702db9c,},Annotations:map[string]string{io.kubernetes.container.hash: b0295188,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"
dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c,PodSandboxId:e51471be8a4d8770daf40bdbeb2a3f7b22aba3e122dacf8566e0562c6b0d3890,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548208286685997,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8fpc5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0d4bdc4-4f17-4b6a-8
958-cecd1884016e,},Annotations:map[string]string{io.kubernetes.container.hash: 1056ce8f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65,PodSandboxId:13e45a234e609bc74e45c6b973015a76c83e0d2322cdf8886e754c3a3e1017dc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548208246241032,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d503e849-8714-402d-aeef-26cd0f4af
f39,},Annotations:map[string]string{io.kubernetes.container.hash: 31f2bf9a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977,PodSandboxId:bed149cfa5222ea9f5646035a62673d7ca4e4d1d70329f4c3bb7bdcaf5f58d22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548203541510930,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b35f67b5d7b32782627020932ee59d3,},Annota
tions:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613,PodSandboxId:b52f4be6b8a49cf663ec44278f5d8ea7b07438899d44f19b78c730ba2daa35b9,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548203515640386,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e1d485739ac46b8bf5f2eddb92efc69d,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 795ca7c8,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2,PodSandboxId:5f1799ffd005b03e2bf07fce31f79457fe6fb1f55809f0e2de61324162e8c050,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548203446346216,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 359f13fb608a64bccba28eae61bdee13,},Annotations:map[string]string{io.kubernetes.container.hash:
4e820126,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c,PodSandboxId:262819213ca04d5070bc9884ced63842973f5b890247da53f61106c31f7c6f9d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548203477082837,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-666637,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8284e42cc130cc7e3b8b526d35eab878,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=eb07a6a5-cda2-4057-9ca6-cab7885f5466 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	413fba3fe664b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      19 minutes ago      Running             storage-provisioner       2                   13e45a234e609       storage-provisioner
	d587ebe2db0fa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Running             busybox                   1                   44db51216f85d       busybox
	4e6f75410b4de       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      20 minutes ago      Running             coredns                   1                   491918599a8b4       coredns-5dd5756b68-t8xb4
	0947f6f374016       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      20 minutes ago      Running             kube-proxy                1                   e51471be8a4d8       kube-proxy-8fpc5
	ea3eb17a8a72d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      20 minutes ago      Exited              storage-provisioner       1                   13e45a234e609       storage-provisioner
	4909a6f121b0c       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      20 minutes ago      Running             kube-scheduler            1                   bed149cfa5222       kube-scheduler-embed-certs-666637
	229fef1811744       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      20 minutes ago      Running             etcd                      1                   b52f4be6b8a49       etcd-embed-certs-666637
	9041a3c9211cc       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      20 minutes ago      Running             kube-controller-manager   1                   262819213ca04       kube-controller-manager-embed-certs-666637
	81025ff5aef08       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      20 minutes ago      Running             kube-apiserver            1                   5f1799ffd005b       kube-apiserver-embed-certs-666637
	
	
	==> coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50814 - 16051 "HINFO IN 7249487384717712784.1440845366661011137. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022155485s
	
	
	==> describe nodes <==
	Name:               embed-certs-666637
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-666637
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=embed-certs-666637
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_08_08_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:08:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-666637
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:36:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:32:37 +0000   Sat, 16 Mar 2024 00:08:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:32:37 +0000   Sat, 16 Mar 2024 00:08:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:32:37 +0000   Sat, 16 Mar 2024 00:08:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:32:37 +0000   Sat, 16 Mar 2024 00:16:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.91
	  Hostname:    embed-certs-666637
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 e251e5a35f1548d18587fe3724a1b0f6
	  System UUID:                e251e5a3-5f15-48d1-8587-fe3724a1b0f6
	  Boot ID:                    78240f16-f223-4c62-a053-d4b16932ca9a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 coredns-5dd5756b68-t8xb4                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-embed-certs-666637                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kube-apiserver-embed-certs-666637             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-embed-certs-666637   200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-8fpc5                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-embed-certs-666637             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 metrics-server-57f55c9bc5-bfnwf               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         27m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node embed-certs-666637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node embed-certs-666637 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node embed-certs-666637 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node embed-certs-666637 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           28m                node-controller  Node embed-certs-666637 event: Registered Node embed-certs-666637 in Controller
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-666637 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-666637 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-666637 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-666637 event: Registered Node embed-certs-666637 in Controller
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051908] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040879] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.487567] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.790399] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.461323] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.510268] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.058399] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067639] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.190466] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +0.139213] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.244320] systemd-fstab-generator[672]: Ignoring "noauto" option for root device
	[  +5.034021] systemd-fstab-generator[772]: Ignoring "noauto" option for root device
	[  +0.061079] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.752537] systemd-fstab-generator[896]: Ignoring "noauto" option for root device
	[  +5.606386] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.005541] systemd-fstab-generator[1500]: Ignoring "noauto" option for root device
	[  +3.661010] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.728664] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] <==
	{"level":"info","ts":"2024-03-16T00:17:00.941366Z","caller":"traceutil/trace.go:171","msg":"trace[1511233561] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-666637; range_end:; response_count:1; response_revision:598; }","duration":"573.522395ms","start":"2024-03-16T00:17:00.367839Z","end":"2024-03-16T00:17:00.941361Z","steps":["trace[1511233561] 'agreement among raft nodes before linearized reading'  (duration: 573.475211ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:00.941385Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:00.367825Z","time spent":"573.555369ms","remote":"127.0.0.1:51954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":5386,"request content":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" "}
	{"level":"info","ts":"2024-03-16T00:17:00.941436Z","caller":"traceutil/trace.go:171","msg":"trace[664336527] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"726.299656ms","start":"2024-03-16T00:17:00.21513Z","end":"2024-03-16T00:17:00.94143Z","steps":["trace[664336527] 'process raft request'  (duration: 602.598693ms)","trace[664336527] 'compare'  (duration: 122.710935ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:17:00.941576Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:00.215119Z","time spent":"726.426604ms","remote":"127.0.0.1:51954","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":6075,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" mod_revision:597 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" value_size:5998 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-embed-certs-666637\" > >"}
	{"level":"warn","ts":"2024-03-16T00:17:01.985954Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"618.258149ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" ","response":"range_response_count:1 size:5363"}
	{"level":"info","ts":"2024-03-16T00:17:01.986025Z","caller":"traceutil/trace.go:171","msg":"trace[678499935] range","detail":"{range_begin:/registry/pods/kube-system/etcd-embed-certs-666637; range_end:; response_count:1; response_revision:598; }","duration":"618.336629ms","start":"2024-03-16T00:17:01.367677Z","end":"2024-03-16T00:17:01.986014Z","steps":["trace[678499935] 'range keys from in-memory index tree'  (duration: 618.177763ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:01.986059Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:01.367663Z","time spent":"618.384302ms","remote":"127.0.0.1:51954","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":1,"response size":5386,"request content":"key:\"/registry/pods/kube-system/etcd-embed-certs-666637\" "}
	{"level":"info","ts":"2024-03-16T00:17:20.272748Z","caller":"traceutil/trace.go:171","msg":"trace[1937797935] linearizableReadLoop","detail":"{readStateIndex:661; appliedIndex:660; }","duration":"172.917714ms","start":"2024-03-16T00:17:20.099808Z","end":"2024-03-16T00:17:20.272726Z","steps":["trace[1937797935] 'read index received'  (duration: 172.573172ms)","trace[1937797935] 'applied index is now lower than readState.Index'  (duration: 343.738µs)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:17:20.27289Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"173.080165ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf\" ","response":"range_response_count:1 size:4026"}
	{"level":"info","ts":"2024-03-16T00:17:20.272913Z","caller":"traceutil/trace.go:171","msg":"trace[1764960300] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf; range_end:; response_count:1; response_revision:615; }","duration":"173.130296ms","start":"2024-03-16T00:17:20.099776Z","end":"2024-03-16T00:17:20.272907Z","steps":["trace[1764960300] 'agreement among raft nodes before linearized reading'  (duration: 173.033008ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:20.273099Z","caller":"traceutil/trace.go:171","msg":"trace[584271795] transaction","detail":"{read_only:false; response_revision:615; number_of_response:1; }","duration":"229.069778ms","start":"2024-03-16T00:17:20.043979Z","end":"2024-03-16T00:17:20.273048Z","steps":["trace[584271795] 'process raft request'  (duration: 228.545251ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:20.789545Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"189.552539ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf\" ","response":"range_response_count:1 size:4026"}
	{"level":"info","ts":"2024-03-16T00:17:20.789688Z","caller":"traceutil/trace.go:171","msg":"trace[860486802] range","detail":"{range_begin:/registry/pods/kube-system/metrics-server-57f55c9bc5-bfnwf; range_end:; response_count:1; response_revision:615; }","duration":"189.828696ms","start":"2024-03-16T00:17:20.599845Z","end":"2024-03-16T00:17:20.789674Z","steps":["trace[860486802] 'range keys from in-memory index tree'  (duration: 189.396447ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:22.100797Z","caller":"traceutil/trace.go:171","msg":"trace[1979416583] transaction","detail":"{read_only:false; response_revision:616; number_of_response:1; }","duration":"258.133652ms","start":"2024-03-16T00:17:21.842642Z","end":"2024-03-16T00:17:22.100776Z","steps":["trace[1979416583] 'process raft request'  (duration: 257.952982ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:22.107271Z","caller":"traceutil/trace.go:171","msg":"trace[159985000] transaction","detail":"{read_only:false; response_revision:617; number_of_response:1; }","duration":"263.966387ms","start":"2024-03-16T00:17:21.843288Z","end":"2024-03-16T00:17:22.107254Z","steps":["trace[159985000] 'process raft request'  (duration: 263.797546ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:17:22.227201Z","caller":"traceutil/trace.go:171","msg":"trace[498684736] transaction","detail":"{read_only:false; response_revision:618; number_of_response:1; }","duration":"109.225336ms","start":"2024-03-16T00:17:22.117954Z","end":"2024-03-16T00:17:22.227179Z","steps":["trace[498684736] 'process raft request'  (duration: 103.747843ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:26:45.614258Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":842}
	{"level":"info","ts":"2024-03-16T00:26:45.619267Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":842,"took":"3.100012ms","hash":1522911222}
	{"level":"info","ts":"2024-03-16T00:26:45.619551Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1522911222,"revision":842,"compact-revision":-1}
	{"level":"info","ts":"2024-03-16T00:31:45.623489Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1084}
	{"level":"info","ts":"2024-03-16T00:31:45.625821Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1084,"took":"1.632286ms","hash":2691428001}
	{"level":"info","ts":"2024-03-16T00:31:45.625956Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2691428001,"revision":1084,"compact-revision":842}
	{"level":"info","ts":"2024-03-16T00:36:45.630071Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1329}
	{"level":"info","ts":"2024-03-16T00:36:45.632082Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1329,"took":"1.678314ms","hash":4154033301}
	{"level":"info","ts":"2024-03-16T00:36:45.632145Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4154033301,"revision":1329,"compact-revision":1084}
	
	
	==> kernel <==
	 00:36:56 up 20 min,  0 users,  load average: 0.09, 0.13, 0.09
	Linux embed-certs-666637 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] <==
	E0316 00:32:48.000689       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:32:48.000715       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:33:46.885795       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0316 00:34:46.886300       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:34:48.000565       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:34:48.000640       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:34:48.000648       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:34:48.001761       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:34:48.001914       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:34:48.001949       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:35:46.886024       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0316 00:36:46.885879       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:36:47.004428       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:36:47.004625       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:36:47.005045       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:36:48.005825       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:36:48.005881       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:36:48.005889       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:36:48.006062       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:36:48.006242       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:36:48.007068       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] <==
	I0316 00:31:00.346892       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:31:29.839376       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:31:30.356751       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:31:59.844914       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:32:00.365182       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:32:29.850588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:32:30.374167       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:32:59.859436       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:33:00.383371       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:33:22.859305       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="294.662µs"
	E0316 00:33:29.866066       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:33:30.391597       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:33:34.862015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="210.914µs"
	E0316 00:33:59.871783       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:34:00.401131       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:34:29.879137       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:34:30.415664       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:34:59.891712       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:35:00.425855       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:35:29.898822       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:35:30.436624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:35:59.906123       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:36:00.445531       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:36:29.911648       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:36:30.454029       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] <==
	I0316 00:16:48.415438       1 server_others.go:69] "Using iptables proxy"
	I0316 00:16:48.425762       1 node.go:141] Successfully retrieved node IP: 192.168.61.91
	I0316 00:16:48.471226       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:16:48.471247       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:16:48.477097       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:16:48.477152       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:16:48.477359       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:16:48.477369       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:16:48.478028       1 config.go:188] "Starting service config controller"
	I0316 00:16:48.478044       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:16:48.478076       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:16:48.478079       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:16:48.478761       1 config.go:315] "Starting node config controller"
	I0316 00:16:48.478770       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:16:48.579558       1 shared_informer.go:318] Caches are synced for node config
	I0316 00:16:48.579590       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:16:48.579728       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] <==
	I0316 00:16:44.669899       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:16:46.966169       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:16:46.966811       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:16:46.966933       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:16:46.966961       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:16:46.991596       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:16:46.991683       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:16:46.993140       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:16:46.993244       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:16:47.003189       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:16:47.003232       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:16:47.093905       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:34:42 embed-certs-666637 kubelet[903]: E0316 00:34:42.853879     903 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:34:42 embed-certs-666637 kubelet[903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:34:42 embed-certs-666637 kubelet[903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:34:42 embed-certs-666637 kubelet[903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:34:42 embed-certs-666637 kubelet[903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:34:45 embed-certs-666637 kubelet[903]: E0316 00:34:45.833852     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:34:57 embed-certs-666637 kubelet[903]: E0316 00:34:57.833348     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:35:08 embed-certs-666637 kubelet[903]: E0316 00:35:08.834289     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:35:19 embed-certs-666637 kubelet[903]: E0316 00:35:19.833359     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:35:32 embed-certs-666637 kubelet[903]: E0316 00:35:32.836220     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:35:42 embed-certs-666637 kubelet[903]: E0316 00:35:42.852763     903 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:35:42 embed-certs-666637 kubelet[903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:35:42 embed-certs-666637 kubelet[903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:35:42 embed-certs-666637 kubelet[903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:35:42 embed-certs-666637 kubelet[903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:35:46 embed-certs-666637 kubelet[903]: E0316 00:35:46.833249     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:36:01 embed-certs-666637 kubelet[903]: E0316 00:36:01.833217     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:36:16 embed-certs-666637 kubelet[903]: E0316 00:36:16.833293     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:36:30 embed-certs-666637 kubelet[903]: E0316 00:36:30.835062     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	Mar 16 00:36:42 embed-certs-666637 kubelet[903]: E0316 00:36:42.852844     903 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:36:42 embed-certs-666637 kubelet[903]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:36:42 embed-certs-666637 kubelet[903]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:36:42 embed-certs-666637 kubelet[903]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:36:42 embed-certs-666637 kubelet[903]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:36:44 embed-certs-666637 kubelet[903]: E0316 00:36:44.836589     903 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-bfnwf" podUID="de35c1e5-3847-4a31-a31a-86aeed12252c"
	
	
	==> storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] <==
	I0316 00:17:19.161354       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:17:19.175015       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:17:19.175128       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:17:36.578324       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:17:36.578751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-666637_eb402c7e-4eec-4a68-8bd2-89381fd513f2!
	I0316 00:17:36.582960       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"70064dba-4c34-4434-8ff6-cae9b56858b1", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-666637_eb402c7e-4eec-4a68-8bd2-89381fd513f2 became leader
	I0316 00:17:36.679978       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-666637_eb402c7e-4eec-4a68-8bd2-89381fd513f2!
	
	
	==> storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] <==
	I0316 00:16:48.352111       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0316 00:17:18.354955       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-666637 -n embed-certs-666637
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-666637 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-bfnwf
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-666637 describe pod metrics-server-57f55c9bc5-bfnwf
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-666637 describe pod metrics-server-57f55c9bc5-bfnwf: exit status 1 (87.303767ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-bfnwf" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-666637 describe pod metrics-server-57f55c9bc5-bfnwf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (397.76s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (491.48s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-16 00:38:48.456334238 +0000 UTC m=+6158.879185509
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-313436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.427µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-313436 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-313436 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-313436 logs -n 25: (1.65547415s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile     |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC |                     |
	|         | status docker --all --full                           |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | cat docker --no-pager                                |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/docker/daemon.json                              |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo docker                           | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC |                     |
	|         | system info                                          |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC |                     |
	|         | status cri-docker --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | cat cri-docker --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo                                  | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | cri-dockerd --version                                |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC |                     |
	|         | status containerd --all --full                       |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | cat containerd --no-pager                            |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /lib/systemd/system/containerd.service               |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo cat                              | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/containerd/config.toml                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-869135 sudo cat                           | kindnet-869135 | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/nsswitch.conf                                   |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo containerd                       | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | config dump                                          |                |         |         |                     |                     |
	| ssh     | -p kindnet-869135 sudo cat                           | kindnet-869135 | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/hosts                                           |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | status crio --all --full                             |                |         |         |                     |                     |
	|         | --no-pager                                           |                |         |         |                     |                     |
	| ssh     | -p kindnet-869135 sudo cat                           | kindnet-869135 | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/resolv.conf                                     |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo systemctl                        | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | cat crio --no-pager                                  |                |         |         |                     |                     |
	| ssh     | -p kindnet-869135 sudo crictl                        | kindnet-869135 | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | pods                                                 |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo find                             | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                |         |         |                     |                     |
	| ssh     | -p auto-869135 sudo crio                             | auto-869135    | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | config                                               |                |         |         |                     |                     |
	| ssh     | -p kindnet-869135 sudo crictl                        | kindnet-869135 | jenkins | v1.32.0 | 16 Mar 24 00:38 UTC | 16 Mar 24 00:38 UTC |
	|         | ps --all                                             |                |         |         |                     |                     |
	|---------|------------------------------------------------------|----------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:37:34
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
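Reading note: the header above describes the klog-style prefix carried by every entry that follows. A worked decode of the first entry below, for orientation (reading the "threadid" field as the minikube process id is an inference from this log, not something the log states):

    # I0316 00:37:34.751808  130439 out.go:291] Setting OutFile to fd 1 ...
    # I               -> severity (I=Info, W=Warning, E=Error, F=Fatal)
    # 0316            -> month and day (March 16)
    # 00:37:34.751808 -> wall-clock time with microseconds
    # 130439          -> thread id; here it effectively identifies the minikube process,
    #                    which is why entries from 129541, 130012 and 130439 interleave below
    # out.go:291      -> source file and line that emitted the message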
	I0316 00:37:34.751808  130439 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:37:34.752265  130439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:37:34.754957  130439 out.go:304] Setting ErrFile to fd 2...
	I0316 00:37:34.755013  130439 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:37:34.755635  130439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:37:34.756802  130439 out.go:298] Setting JSON to false
	I0316 00:37:34.757803  130439 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":12005,"bootTime":1710537450,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:37:34.757894  130439 start.go:139] virtualization: kvm guest
	I0316 00:37:34.759458  130439 out.go:177] * [newest-cni-143629] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:37:34.761033  130439 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:37:34.761154  130439 notify.go:220] Checking for updates...
	I0316 00:37:34.762396  130439 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:37:34.763810  130439 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:37:34.765607  130439 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:37:34.766849  130439 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:37:34.768056  130439 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:37:34.769677  130439 config.go:182] Loaded profile config "newest-cni-143629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:37:34.770111  130439 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:34.770169  130439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:34.785846  130439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37625
	I0316 00:37:34.786552  130439 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:34.787216  130439 main.go:141] libmachine: Using API Version  1
	I0316 00:37:34.787247  130439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:34.787950  130439 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:34.788225  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:37:34.788552  130439 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:37:34.788981  130439 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:34.789057  130439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:34.806037  130439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38351
	I0316 00:37:34.806510  130439 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:34.807003  130439 main.go:141] libmachine: Using API Version  1
	I0316 00:37:34.807030  130439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:34.807522  130439 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:34.807767  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:37:34.842013  130439 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:37:34.843394  130439 start.go:297] selected driver: kvm2
	I0316 00:37:34.843415  130439 start.go:901] validating driver "kvm2" against &{Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:37:34.843542  130439 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:37:34.844273  130439 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:37:34.844333  130439 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:37:34.858537  130439 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:37:34.859008  130439 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0316 00:37:34.859077  130439 cni.go:84] Creating CNI manager for ""
	I0316 00:37:34.859092  130439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:37:34.859137  130439 start.go:340] cluster config:
	{Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:37:34.859264  130439 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:37:34.861080  130439 out.go:177] * Starting "newest-cni-143629" primary control-plane node in "newest-cni-143629" cluster
	I0316 00:37:37.818991  129541 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0316 00:37:37.819070  129541 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:37:37.819178  129541 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:37:37.819355  129541 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:37:37.819508  129541 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:37:37.819618  129541 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:37:37.821254  129541 out.go:204]   - Generating certificates and keys ...
	I0316 00:37:37.821341  129541 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:37:37.821405  129541 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:37:37.821488  129541 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 00:37:37.821556  129541 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 00:37:37.821636  129541 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 00:37:37.821714  129541 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 00:37:37.821798  129541 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 00:37:37.821957  129541 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [auto-869135 localhost] and IPs [192.168.50.112 127.0.0.1 ::1]
	I0316 00:37:37.822020  129541 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 00:37:37.822217  129541 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [auto-869135 localhost] and IPs [192.168.50.112 127.0.0.1 ::1]
	I0316 00:37:37.822310  129541 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 00:37:37.822404  129541 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 00:37:37.822469  129541 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 00:37:37.822551  129541 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:37:37.822616  129541 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:37:37.822684  129541 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:37:37.822789  129541 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:37:37.822863  129541 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:37:37.822976  129541 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:37:37.823068  129541 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:37:37.824690  129541 out.go:204]   - Booting up control plane ...
	I0316 00:37:37.824778  129541 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:37:37.824849  129541 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:37:37.824905  129541 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:37:37.825040  129541 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:37:37.825171  129541 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:37:37.825224  129541 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:37:37.825424  129541 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:37:37.825546  129541 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.502041 seconds
	I0316 00:37:37.825687  129541 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:37:37.825841  129541 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:37:37.825928  129541 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:37:37.826182  129541 kubeadm.go:309] [mark-control-plane] Marking the node auto-869135 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:37:37.826261  129541 kubeadm.go:309] [bootstrap-token] Using token: njuqkw.tefs46cz4ph1vt9v
	I0316 00:37:37.827651  129541 out.go:204]   - Configuring RBAC rules ...
	I0316 00:37:37.827783  129541 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:37:37.827895  129541 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:37:37.828075  129541 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:37:37.828220  129541 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:37:37.828351  129541 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:37:37.828480  129541 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:37:37.828645  129541 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:37:37.828701  129541 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:37:37.828776  129541 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:37:37.828795  129541 kubeadm.go:309] 
	I0316 00:37:37.828882  129541 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:37:37.828891  129541 kubeadm.go:309] 
	I0316 00:37:37.828990  129541 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:37:37.828999  129541 kubeadm.go:309] 
	I0316 00:37:37.829044  129541 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:37:37.829122  129541 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:37:37.829200  129541 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:37:37.829212  129541 kubeadm.go:309] 
	I0316 00:37:37.829319  129541 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:37:37.829333  129541 kubeadm.go:309] 
	I0316 00:37:37.829410  129541 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:37:37.829418  129541 kubeadm.go:309] 
	I0316 00:37:37.829489  129541 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:37:37.829595  129541 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:37:37.829691  129541 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:37:37.829700  129541 kubeadm.go:309] 
	I0316 00:37:37.829797  129541 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:37:37.829905  129541 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:37:37.829921  129541 kubeadm.go:309] 
	I0316 00:37:37.830022  129541 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token njuqkw.tefs46cz4ph1vt9v \
	I0316 00:37:37.830148  129541 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:37:37.830183  129541 kubeadm.go:309] 	--control-plane 
	I0316 00:37:37.830197  129541 kubeadm.go:309] 
	I0316 00:37:37.830315  129541 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:37:37.830326  129541 kubeadm.go:309] 
	I0316 00:37:37.830439  129541 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token njuqkw.tefs46cz4ph1vt9v \
	I0316 00:37:37.830600  129541 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0316 00:37:37.830617  129541 cni.go:84] Creating CNI manager for ""
	I0316 00:37:37.830625  129541 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:37:37.832141  129541 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
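The bridge CNI configuration announced here by process 129541 is applied a few entries further down in this log as a `sudo mkdir -p /etc/cni/net.d` followed by an scp of a 457-byte `1-k8s.conflist` onto the guest. The exact file contents are not captured in the log; the following is only a hypothetical sketch of what a minimal bridge-type conflist of that kind typically looks like (the subnet value is an assumed placeholder, not taken from this run):

    # Hypothetical illustration, not the file written in this run:
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF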
	I0316 00:37:36.107131  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:36.107689  130012 main.go:141] libmachine: (kindnet-869135) DBG | unable to find current IP address of domain kindnet-869135 in network mk-kindnet-869135
	I0316 00:37:36.107712  130012 main.go:141] libmachine: (kindnet-869135) DBG | I0316 00:37:36.107654  130149 retry.go:31] will retry after 5.322673105s: waiting for machine to come up
	I0316 00:37:34.862343  130439 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:37:34.862384  130439 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0316 00:37:34.862405  130439 cache.go:56] Caching tarball of preloaded images
	I0316 00:37:34.862471  130439 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:37:34.862486  130439 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0316 00:37:34.862593  130439 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/config.json ...
	I0316 00:37:34.862806  130439 start.go:360] acquireMachinesLock for newest-cni-143629: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:37:37.833414  129541 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:37:37.853650  129541 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:37:37.892357  129541 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:37:37.892523  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-869135 minikube.k8s.io/updated_at=2024_03_16T00_37_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=auto-869135 minikube.k8s.io/primary=true
	I0316 00:37:37.892528  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:37.958974  129541 ops.go:34] apiserver oom_adj: -16
	I0316 00:37:38.074877  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:38.575852  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:39.074902  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:39.575260  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:40.075360  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:40.575709  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:41.074970  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
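The burst of identical `kubectl get sa default` invocations above is minikube polling for the `default` ServiceAccount, which only appears once the control plane's ServiceAccount controller is serving; the roughly 500ms spacing of the timestamps is the retry interval. A hedged shell equivalent of that wait, reusing the binary and kubeconfig paths from the log (the loop itself is an illustration, not minikube's own code):

    # Illustration only: block until the default ServiceAccount exists.
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries roughly every 500ms
    done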
	I0316 00:37:42.896620  130439 start.go:364] duration metric: took 8.033783057s to acquireMachinesLock for "newest-cni-143629"
	I0316 00:37:42.896694  130439 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:37:42.896708  130439 fix.go:54] fixHost starting: 
	I0316 00:37:42.897113  130439 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:42.897166  130439 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:42.914124  130439 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35195
	I0316 00:37:42.914540  130439 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:42.915051  130439 main.go:141] libmachine: Using API Version  1
	I0316 00:37:42.915079  130439 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:42.915476  130439 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:42.915700  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:37:42.915867  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetState
	I0316 00:37:42.917591  130439 fix.go:112] recreateIfNeeded on newest-cni-143629: state=Stopped err=<nil>
	I0316 00:37:42.917638  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	W0316 00:37:42.917920  130439 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:37:42.920930  130439 out.go:177] * Restarting existing kvm2 VM for "newest-cni-143629" ...
	I0316 00:37:41.434425  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.435134  130012 main.go:141] libmachine: (kindnet-869135) Found IP for machine: 192.168.61.68
	I0316 00:37:41.435157  130012 main.go:141] libmachine: (kindnet-869135) Reserving static IP address...
	I0316 00:37:41.435171  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has current primary IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.435599  130012 main.go:141] libmachine: (kindnet-869135) DBG | unable to find host DHCP lease matching {name: "kindnet-869135", mac: "52:54:00:db:c2:ba", ip: "192.168.61.68"} in network mk-kindnet-869135
	I0316 00:37:41.512564  130012 main.go:141] libmachine: (kindnet-869135) DBG | Getting to WaitForSSH function...
	I0316 00:37:41.512599  130012 main.go:141] libmachine: (kindnet-869135) Reserved static IP address: 192.168.61.68
	I0316 00:37:41.512612  130012 main.go:141] libmachine: (kindnet-869135) Waiting for SSH to be available...
	I0316 00:37:41.515037  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.515561  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:minikube Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:41.515594  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.515736  130012 main.go:141] libmachine: (kindnet-869135) DBG | Using SSH client type: external
	I0316 00:37:41.515768  130012 main.go:141] libmachine: (kindnet-869135) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa (-rw-------)
	I0316 00:37:41.515821  130012 main.go:141] libmachine: (kindnet-869135) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.68 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:37:41.515847  130012 main.go:141] libmachine: (kindnet-869135) DBG | About to run SSH command:
	I0316 00:37:41.515865  130012 main.go:141] libmachine: (kindnet-869135) DBG | exit 0
	I0316 00:37:41.647610  130012 main.go:141] libmachine: (kindnet-869135) DBG | SSH cmd err, output: <nil>: 
	I0316 00:37:41.647816  130012 main.go:141] libmachine: (kindnet-869135) KVM machine creation complete!
	I0316 00:37:41.648217  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetConfigRaw
	I0316 00:37:41.648838  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:41.649043  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:41.649228  130012 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0316 00:37:41.649244  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetState
	I0316 00:37:41.650830  130012 main.go:141] libmachine: Detecting operating system of created instance...
	I0316 00:37:41.650846  130012 main.go:141] libmachine: Waiting for SSH to be available...
	I0316 00:37:41.650852  130012 main.go:141] libmachine: Getting to WaitForSSH function...
	I0316 00:37:41.650858  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:41.653399  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.653838  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:41.653861  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.653995  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:41.654188  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:41.654379  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:41.654547  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:41.654815  130012 main.go:141] libmachine: Using SSH client type: native
	I0316 00:37:41.655021  130012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I0316 00:37:41.655036  130012 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0316 00:37:41.762869  130012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:37:41.762897  130012 main.go:141] libmachine: Detecting the provisioner...
	I0316 00:37:41.762907  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:41.765992  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.766391  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:41.766430  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.766573  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:41.766793  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:41.766977  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:41.767125  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:41.767314  130012 main.go:141] libmachine: Using SSH client type: native
	I0316 00:37:41.767540  130012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I0316 00:37:41.767554  130012 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0316 00:37:41.876502  130012 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0316 00:37:41.876581  130012 main.go:141] libmachine: found compatible host: buildroot
	I0316 00:37:41.876592  130012 main.go:141] libmachine: Provisioning with buildroot...
	I0316 00:37:41.876606  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetMachineName
	I0316 00:37:41.876905  130012 buildroot.go:166] provisioning hostname "kindnet-869135"
	I0316 00:37:41.876933  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetMachineName
	I0316 00:37:41.877105  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:41.879518  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.879891  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:41.879936  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:41.880038  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:41.880210  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:41.880339  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:41.880462  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:41.880631  130012 main.go:141] libmachine: Using SSH client type: native
	I0316 00:37:41.880800  130012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I0316 00:37:41.880812  130012 main.go:141] libmachine: About to run SSH command:
	sudo hostname kindnet-869135 && echo "kindnet-869135" | sudo tee /etc/hostname
	I0316 00:37:42.008274  130012 main.go:141] libmachine: SSH cmd err, output: <nil>: kindnet-869135
	
	I0316 00:37:42.008309  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.011616  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.011994  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.012027  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.012222  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.012442  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.012639  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.012769  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.012939  130012 main.go:141] libmachine: Using SSH client type: native
	I0316 00:37:42.013163  130012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I0316 00:37:42.013183  130012 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skindnet-869135' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kindnet-869135/g' /etc/hosts;
				else 
					echo '127.0.1.1 kindnet-869135' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:37:42.133370  130012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:37:42.133405  130012 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:37:42.133463  130012 buildroot.go:174] setting up certificates
	I0316 00:37:42.133488  130012 provision.go:84] configureAuth start
	I0316 00:37:42.133507  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetMachineName
	I0316 00:37:42.133837  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetIP
	I0316 00:37:42.136711  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.137106  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.137136  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.137348  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.139926  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.140302  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.140331  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.140474  130012 provision.go:143] copyHostCerts
	I0316 00:37:42.140551  130012 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:37:42.140565  130012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:37:42.140635  130012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:37:42.140770  130012 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:37:42.140780  130012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:37:42.140817  130012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:37:42.140900  130012 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:37:42.140910  130012 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:37:42.140939  130012 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:37:42.141022  130012 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.kindnet-869135 san=[127.0.0.1 192.168.61.68 kindnet-869135 localhost minikube]
	I0316 00:37:42.181970  130012 provision.go:177] copyRemoteCerts
	I0316 00:37:42.182037  130012 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:37:42.182062  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.184827  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.185180  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.185212  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.185392  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.185605  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.185781  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.185932  130012 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa Username:docker}
	I0316 00:37:42.270754  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0316 00:37:42.297023  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:37:42.322554  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:37:42.348847  130012 provision.go:87] duration metric: took 215.340331ms to configureAuth
	I0316 00:37:42.348881  130012 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:37:42.349090  130012 config.go:182] Loaded profile config "kindnet-869135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:37:42.349183  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.351967  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.352337  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.352365  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.352551  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.352741  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.352928  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.353080  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.353245  130012 main.go:141] libmachine: Using SSH client type: native
	I0316 00:37:42.353417  130012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I0316 00:37:42.353438  130012 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:37:42.645149  130012 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
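The `%!s(MISSING)` in the command logged above is a formatting artifact: minikube records the printf template without its substituted argument. Judging from the command's captured output just above, the command as presumably executed on the guest was (a hedged reconstruction, not a verbatim quote from this run):

    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio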
	
	I0316 00:37:42.645184  130012 main.go:141] libmachine: Checking connection to Docker...
	I0316 00:37:42.645195  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetURL
	I0316 00:37:42.646464  130012 main.go:141] libmachine: (kindnet-869135) DBG | Using libvirt version 6000000
	I0316 00:37:42.648793  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.649099  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.649132  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.649267  130012 main.go:141] libmachine: Docker is up and running!
	I0316 00:37:42.649289  130012 main.go:141] libmachine: Reticulating splines...
	I0316 00:37:42.649299  130012 client.go:171] duration metric: took 24.834864629s to LocalClient.Create
	I0316 00:37:42.649324  130012 start.go:167] duration metric: took 24.834929221s to libmachine.API.Create "kindnet-869135"
	I0316 00:37:42.649336  130012 start.go:293] postStartSetup for "kindnet-869135" (driver="kvm2")
	I0316 00:37:42.649348  130012 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:37:42.649378  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:42.649668  130012 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:37:42.649702  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.652146  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.652516  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.652544  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.652701  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.652876  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.653052  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.653213  130012 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa Username:docker}
	I0316 00:37:42.738314  130012 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:37:42.742985  130012 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:37:42.743008  130012 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:37:42.743069  130012 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:37:42.743152  130012 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:37:42.743289  130012 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:37:42.753029  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:37:42.778719  130012 start.go:296] duration metric: took 129.366534ms for postStartSetup
	I0316 00:37:42.778768  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetConfigRaw
	I0316 00:37:42.779450  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetIP
	I0316 00:37:42.782266  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.782600  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.782620  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.782870  130012 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/config.json ...
	I0316 00:37:42.783050  130012 start.go:128] duration metric: took 24.99008178s to createHost
	I0316 00:37:42.783073  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.785169  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.785532  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.785562  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.785677  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.785882  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.786046  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.786187  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.786328  130012 main.go:141] libmachine: Using SSH client type: native
	I0316 00:37:42.786523  130012 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.68 22 <nil> <nil>}
	I0316 00:37:42.786536  130012 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:37:42.896378  130012 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710549462.876186493
	
	I0316 00:37:42.896405  130012 fix.go:216] guest clock: 1710549462.876186493
	I0316 00:37:42.896416  130012 fix.go:229] Guest: 2024-03-16 00:37:42.876186493 +0000 UTC Remote: 2024-03-16 00:37:42.783061103 +0000 UTC m=+43.712873000 (delta=93.12539ms)
	I0316 00:37:42.896444  130012 fix.go:200] guest clock delta is within tolerance: 93.12539ms
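The `date +%!s(MISSING).%!N(MISSING)` command a few entries above is the same logging artifact: the template presumably expands to `date +%s.%N`, and its seconds.nanoseconds output is what fix.go parses as the guest clock before comparing it against the host clock and accepting the ~93ms delta as within tolerance instead of forcing a time sync. For reference (a reconstruction for illustration, not a quote from the run):

    # Reconstructed guest-side command; in this run it printed 1710549462.876186493
    date +%s.%N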
	I0316 00:37:42.896453  130012 start.go:83] releasing machines lock for "kindnet-869135", held for 25.103693829s
	I0316 00:37:42.896484  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:42.896823  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetIP
	I0316 00:37:42.899884  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.900293  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.900322  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.900544  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:42.901057  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:42.901226  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:37:42.901296  130012 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:37:42.901337  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.901449  130012 ssh_runner.go:195] Run: cat /version.json
	I0316 00:37:42.901490  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:37:42.904185  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.904474  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.904520  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.904545  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.904709  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.904853  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.904958  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:42.904983  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:42.905018  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.905100  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:37:42.905166  130012 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa Username:docker}
	I0316 00:37:42.905244  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:37:42.905366  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:37:42.905578  130012 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa Username:docker}
	I0316 00:37:42.984934  130012 ssh_runner.go:195] Run: systemctl --version
	I0316 00:37:43.009322  130012 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:37:43.177194  130012 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:37:43.184610  130012 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:37:43.184675  130012 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:37:43.201826  130012 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:37:43.201852  130012 start.go:494] detecting cgroup driver to use...
	I0316 00:37:43.201939  130012 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:37:43.219013  130012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:37:43.234275  130012 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:37:43.234356  130012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:37:43.249029  130012 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:37:43.265842  130012 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:37:43.393745  130012 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:37:43.563850  130012 docker.go:233] disabling docker service ...
	I0316 00:37:43.563935  130012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:37:43.584138  130012 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:37:43.603741  130012 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:37:43.723906  130012 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:37:43.869947  130012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:37:43.885460  130012 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:37:43.908845  130012 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:37:43.908914  130012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:37:43.921018  130012 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:37:43.921088  130012 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:37:43.933927  130012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:37:43.946631  130012 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:37:43.958403  130012 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:37:43.970726  130012 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:37:43.981894  130012 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:37:43.981965  130012 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:37:43.998626  130012 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:37:44.009274  130012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:37:44.186297  130012 ssh_runner.go:195] Run: sudo systemctl restart crio
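The steps above rewrite keys in the cri-o drop-in config (/etc/crio/crio.conf.d/02-crio.conf) with sed (pause image, cgroup_manager, conmon_cgroup) and then restart the service. A minimal Go sketch of that key-rewrite step; the helper name and append-if-missing behaviour are illustrative, not minikube's implementation:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    // setConfKey replaces (or appends) a `key = "value"` line in a cri-o style
    // drop-in config, mirroring the sed invocations in the log above.
    func setConfKey(path, key, value string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        line := fmt.Sprintf("%s = %q", key, value)
        if re.Match(data) {
            data = re.ReplaceAll(data, []byte(line))
        } else {
            data = append(data, []byte("\n"+line+"\n")...)
        }
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        if err := setConfKey("/etc/crio/crio.conf.d/02-crio.conf", "cgroup_manager", "cgroupfs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }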
	I0316 00:37:44.344727  130012 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:37:44.344811  130012 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:37:44.350449  130012 start.go:562] Will wait 60s for crictl version
	I0316 00:37:44.350517  130012 ssh_runner.go:195] Run: which crictl
	I0316 00:37:44.354695  130012 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:37:44.394263  130012 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:37:44.394356  130012 ssh_runner.go:195] Run: crio --version
	I0316 00:37:44.425636  130012 ssh_runner.go:195] Run: crio --version
	I0316 00:37:44.460914  130012 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:37:42.922130  130439 main.go:141] libmachine: (newest-cni-143629) Calling .Start
	I0316 00:37:42.922303  130439 main.go:141] libmachine: (newest-cni-143629) Ensuring networks are active...
	I0316 00:37:42.923134  130439 main.go:141] libmachine: (newest-cni-143629) Ensuring network default is active
	I0316 00:37:42.923540  130439 main.go:141] libmachine: (newest-cni-143629) Ensuring network mk-newest-cni-143629 is active
	I0316 00:37:42.924086  130439 main.go:141] libmachine: (newest-cni-143629) Getting domain xml...
	I0316 00:37:42.924814  130439 main.go:141] libmachine: (newest-cni-143629) Creating domain...
	I0316 00:37:44.230323  130439 main.go:141] libmachine: (newest-cni-143629) Waiting to get IP...
	I0316 00:37:44.231217  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:44.231678  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:44.231719  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:44.231649  130527 retry.go:31] will retry after 220.76171ms: waiting for machine to come up
	I0316 00:37:44.454365  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:44.455005  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:44.455033  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:44.454917  130527 retry.go:31] will retry after 335.812832ms: waiting for machine to come up
	I0316 00:37:41.575876  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:42.075760  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:42.575311  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:43.074934  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:43.575182  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:44.075136  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:44.575025  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:45.075780  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:45.575154  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:46.075420  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:44.462437  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetIP
	I0316 00:37:44.465154  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:44.465481  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:37:44.465511  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:37:44.465671  130012 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:37:44.470449  130012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
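The bash pipeline above strips any stale host.minikube.internal line from /etc/hosts and appends the current mapping, so the edit is idempotent. A minimal Go sketch of the same rewrite, using a temporary file path so the example is safe to run as-is:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostEntry rewrites an /etc/hosts-style file so it contains exactly
    // one line mapping name to ip, mirroring the grep/echo pipeline in the log.
    func ensureHostEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(strings.TrimSpace(line), name) {
                continue // drop any stale entry for this name
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        tmp := "/tmp/hosts.example" // illustrative path; the log edits /etc/hosts
        _ = os.WriteFile(tmp, []byte("127.0.0.1\tlocalhost\n"), 0o644)
        if err := ensureHostEntry(tmp, "192.168.61.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }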
	I0316 00:37:44.484018  130012 kubeadm.go:877] updating cluster {Name:kindnet-869135 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:kindnet-869135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort
:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:37:44.484124  130012 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:37:44.484184  130012 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:37:44.518959  130012 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:37:44.519050  130012 ssh_runner.go:195] Run: which lz4
	I0316 00:37:44.524926  130012 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:37:44.529743  130012 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:37:44.529779  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:37:46.338977  130012 crio.go:444] duration metric: took 1.814090593s to copy over tarball
	I0316 00:37:46.339066  130012 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:37:44.792685  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:44.793397  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:44.793455  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:44.793367  130527 retry.go:31] will retry after 376.116466ms: waiting for machine to come up
	I0316 00:37:45.171132  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:45.171834  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:45.171869  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:45.171700  130527 retry.go:31] will retry after 422.09258ms: waiting for machine to come up
	I0316 00:37:45.595686  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:45.596338  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:45.596365  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:45.596281  130527 retry.go:31] will retry after 555.165839ms: waiting for machine to come up
	I0316 00:37:46.153122  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:46.153642  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:46.153678  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:46.153582  130527 retry.go:31] will retry after 731.596499ms: waiting for machine to come up
	I0316 00:37:46.886535  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:46.887050  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:46.887079  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:46.887004  130527 retry.go:31] will retry after 795.163775ms: waiting for machine to come up
	I0316 00:37:47.684021  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:47.684499  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:47.684531  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:47.684469  130527 retry.go:31] will retry after 1.238807385s: waiting for machine to come up
	I0316 00:37:48.925275  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:48.925865  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:48.925890  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:48.925842  130527 retry.go:31] will retry after 1.146021342s: waiting for machine to come up
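The DBG lines above come from a retry loop that repeatedly asks libvirt for the machine's DHCP lease and sleeps a growing, jittered delay between attempts until an IP appears. A generic sketch of such a poll loop, not minikube's retry package:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or attempts run out,
    // sleeping a growing, jittered delay between tries, similar in spirit to
    // the retry.go lines in the log above.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay) / 2))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2
        }
        return "", errors.New("machine did not get an IP in time")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.10", nil // placeholder address for the example
        }, 10)
        fmt.Println(ip, err)
    }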
	I0316 00:37:46.575151  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:47.075040  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:47.574985  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:48.075423  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:48.575767  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:49.075757  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:49.575679  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:50.224640  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:50.952507  129541 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:37:51.258112  129541 kubeadm.go:1107] duration metric: took 13.365669096s to wait for elevateKubeSystemPrivileges
	W0316 00:37:51.258149  129541 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:37:51.258159  129541 kubeadm.go:393] duration metric: took 24.957732225s to StartCluster
	I0316 00:37:51.258185  129541 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:51.258269  129541 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:37:51.259734  129541 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:51.259987  129541 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.50.112 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:37:51.262135  129541 out.go:177] * Verifying Kubernetes components...
	I0316 00:37:51.260133  129541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0316 00:37:51.260145  129541 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:37:51.260419  129541 config.go:182] Loaded profile config "auto-869135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:37:51.263497  129541 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:37:51.263508  129541 addons.go:69] Setting storage-provisioner=true in profile "auto-869135"
	I0316 00:37:51.263526  129541 addons.go:69] Setting default-storageclass=true in profile "auto-869135"
	I0316 00:37:51.263550  129541 addons.go:234] Setting addon storage-provisioner=true in "auto-869135"
	I0316 00:37:51.263574  129541 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-869135"
	I0316 00:37:51.263586  129541 host.go:66] Checking if "auto-869135" exists ...
	I0316 00:37:51.264056  129541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:51.264081  129541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:51.264125  129541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:51.264211  129541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:51.280811  129541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I0316 00:37:51.281310  129541 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:51.281968  129541 main.go:141] libmachine: Using API Version  1
	I0316 00:37:51.281991  129541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:51.282450  129541 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:51.283203  129541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:51.283252  129541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:51.283695  129541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41127
	I0316 00:37:51.284174  129541 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:51.284712  129541 main.go:141] libmachine: Using API Version  1
	I0316 00:37:51.284729  129541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:51.285222  129541 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:51.285407  129541 main.go:141] libmachine: (auto-869135) Calling .GetState
	I0316 00:37:51.288827  129541 addons.go:234] Setting addon default-storageclass=true in "auto-869135"
	I0316 00:37:51.288867  129541 host.go:66] Checking if "auto-869135" exists ...
	I0316 00:37:51.289246  129541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:51.289286  129541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:51.305313  129541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38681
	I0316 00:37:51.305793  129541 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:51.306392  129541 main.go:141] libmachine: Using API Version  1
	I0316 00:37:51.306485  129541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46657
	I0316 00:37:51.306644  129541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:51.306981  129541 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:51.307056  129541 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:51.307245  129541 main.go:141] libmachine: (auto-869135) Calling .GetState
	I0316 00:37:51.307581  129541 main.go:141] libmachine: Using API Version  1
	I0316 00:37:51.307599  129541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:51.307911  129541 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:51.308476  129541 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:37:51.308521  129541 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:37:51.315501  129541 main.go:141] libmachine: (auto-869135) Calling .DriverName
	I0316 00:37:51.317994  129541 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:37:51.319350  129541 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:37:51.319368  129541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:37:51.319389  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHHostname
	I0316 00:37:51.322526  129541 main.go:141] libmachine: (auto-869135) DBG | domain auto-869135 has defined MAC address 52:54:00:71:8d:ff in network mk-auto-869135
	I0316 00:37:51.323134  129541 main.go:141] libmachine: (auto-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:8d:ff", ip: ""} in network mk-auto-869135: {Iface:virbr2 ExpiryTime:2024-03-16 01:37:06 +0000 UTC Type:0 Mac:52:54:00:71:8d:ff Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:auto-869135 Clientid:01:52:54:00:71:8d:ff}
	I0316 00:37:51.323166  129541 main.go:141] libmachine: (auto-869135) DBG | domain auto-869135 has defined IP address 192.168.50.112 and MAC address 52:54:00:71:8d:ff in network mk-auto-869135
	I0316 00:37:51.323409  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHPort
	I0316 00:37:51.323578  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHKeyPath
	I0316 00:37:51.323723  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHUsername
	I0316 00:37:51.323867  129541 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135/id_rsa Username:docker}
	I0316 00:37:51.328957  129541 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46055
	I0316 00:37:51.329310  129541 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:37:51.329877  129541 main.go:141] libmachine: Using API Version  1
	I0316 00:37:51.329896  129541 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:37:51.330267  129541 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:37:51.330445  129541 main.go:141] libmachine: (auto-869135) Calling .GetState
	I0316 00:37:51.332023  129541 main.go:141] libmachine: (auto-869135) Calling .DriverName
	I0316 00:37:51.332402  129541 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:37:51.332418  129541 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:37:51.332438  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHHostname
	I0316 00:37:51.335521  129541 main.go:141] libmachine: (auto-869135) DBG | domain auto-869135 has defined MAC address 52:54:00:71:8d:ff in network mk-auto-869135
	I0316 00:37:51.335963  129541 main.go:141] libmachine: (auto-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:71:8d:ff", ip: ""} in network mk-auto-869135: {Iface:virbr2 ExpiryTime:2024-03-16 01:37:06 +0000 UTC Type:0 Mac:52:54:00:71:8d:ff Iaid: IPaddr:192.168.50.112 Prefix:24 Hostname:auto-869135 Clientid:01:52:54:00:71:8d:ff}
	I0316 00:37:51.335984  129541 main.go:141] libmachine: (auto-869135) DBG | domain auto-869135 has defined IP address 192.168.50.112 and MAC address 52:54:00:71:8d:ff in network mk-auto-869135
	I0316 00:37:51.336230  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHPort
	I0316 00:37:51.336415  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHKeyPath
	I0316 00:37:51.336567  129541 main.go:141] libmachine: (auto-869135) Calling .GetSSHUsername
	I0316 00:37:51.336715  129541 sshutil.go:53] new ssh client: &{IP:192.168.50.112 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/auto-869135/id_rsa Username:docker}
	I0316 00:37:49.389136  130012 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.050029995s)
	I0316 00:37:49.389178  130012 crio.go:451] duration metric: took 3.050169811s to extract the tarball
	I0316 00:37:49.389188  130012 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:37:49.457583  130012 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:37:49.515897  130012 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:37:49.515926  130012 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:37:49.515934  130012 kubeadm.go:928] updating node { 192.168.61.68 8443 v1.28.4 crio true true} ...
	I0316 00:37:49.516077  130012 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kindnet-869135 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:kindnet-869135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet}
	I0316 00:37:49.516150  130012 ssh_runner.go:195] Run: crio config
	I0316 00:37:49.577319  130012 cni.go:84] Creating CNI manager for "kindnet"
	I0316 00:37:49.577347  130012 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:37:49.577380  130012 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kindnet-869135 NodeName:kindnet-869135 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/ku
bernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:37:49.577556  130012 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kindnet-869135"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:37:49.577620  130012 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:37:49.590360  130012 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:37:49.590439  130012 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:37:49.603336  130012 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0316 00:37:49.623507  130012 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:37:49.643224  130012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2155 bytes)
	I0316 00:37:49.665566  130012 ssh_runner.go:195] Run: grep 192.168.61.68	control-plane.minikube.internal$ /etc/hosts
	I0316 00:37:49.669813  130012 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.68	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:37:49.688143  130012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:37:49.832875  130012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:37:49.853001  130012 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135 for IP: 192.168.61.68
	I0316 00:37:49.853027  130012 certs.go:194] generating shared ca certs ...
	I0316 00:37:49.853062  130012 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:49.853262  130012 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:37:49.853319  130012 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:37:49.853332  130012 certs.go:256] generating profile certs ...
	I0316 00:37:49.853405  130012 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/client.key
	I0316 00:37:49.853423  130012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/client.crt with IP's: []
	I0316 00:37:50.187010  130012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/client.crt ...
	I0316 00:37:50.187044  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/client.crt: {Name:mk9529dc56da74cdfe8cc44161e25e9a5547b120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:50.187247  130012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/client.key ...
	I0316 00:37:50.187262  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/client.key: {Name:mkf63e9d3d28136dd4a2910f917630aaf99a98da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:50.187405  130012 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.key.3cd07f41
	I0316 00:37:50.187425  130012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.crt.3cd07f41 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.68]
	I0316 00:37:50.449001  130012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.crt.3cd07f41 ...
	I0316 00:37:50.449035  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.crt.3cd07f41: {Name:mk36b9e73caec0ac93df47545342d74b62021ed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:50.449243  130012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.key.3cd07f41 ...
	I0316 00:37:50.449262  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.key.3cd07f41: {Name:mk7e84fb8b4d9ff56485583e0d113cd3d2fd665f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:50.449369  130012 certs.go:381] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.crt.3cd07f41 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.crt
	I0316 00:37:50.449478  130012 certs.go:385] copying /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.key.3cd07f41 -> /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.key
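The apiserver profile cert generated above is signed by the minikube CA and carries IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.61.68) so the API server is reachable under the service VIP, loopback, and the node address. A self-contained crypto/x509 sketch of issuing such a cert from a throwaway CA; this is not minikube's code, and error handling is elided for brevity:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Throwaway CA for the example.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the IP SANs seen in the log above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.61.68"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER})
        fmt.Println("issued cert with IP SANs")
    }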
	I0316 00:37:50.449560  130012 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.key
	I0316 00:37:50.449585  130012 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.crt with IP's: []
	I0316 00:37:50.606086  130012 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.crt ...
	I0316 00:37:50.606127  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.crt: {Name:mkaea48dbbef411ae19a4718daaa8c1771be33e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:50.606348  130012 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.key ...
	I0316 00:37:50.606370  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.key: {Name:mkef58b7c6058349bdac3bab036e3844bf83c6ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:37:50.606629  130012 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:37:50.606672  130012 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:37:50.606681  130012 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:37:50.606706  130012 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:37:50.606745  130012 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:37:50.606783  130012 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:37:50.606850  130012 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:37:50.607622  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:37:50.645584  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:37:50.698172  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:37:50.727015  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:37:50.754903  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0316 00:37:50.781578  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:37:50.808700  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:37:50.835899  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/kindnet-869135/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:37:50.864076  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:37:50.893231  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:37:50.922521  130012 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:37:50.954131  130012 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:37:50.975757  130012 ssh_runner.go:195] Run: openssl version
	I0316 00:37:50.982277  130012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:37:50.995080  130012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:37:51.000392  130012 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:37:51.000464  130012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:37:51.006888  130012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:37:51.019156  130012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:37:51.030745  130012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:37:51.035954  130012 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:37:51.036024  130012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:37:51.042342  130012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:37:51.054947  130012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:37:51.066720  130012 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:37:51.071617  130012 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:37:51.071666  130012 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:37:51.077686  130012 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
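Each extra CA certificate above is made visible to the system trust store by computing its OpenSSL subject hash and symlinking <hash>.0 in /etc/ssl/certs to it, which is what the `openssl x509 -hash -noout` plus `ln -fs` pairs do. A small Go sketch of that step; the helper name is hypothetical and running it against /etc/ssl/certs would need root:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCACert symlinks certPath into certsDir under its OpenSSL subject
    // hash (<hash>.0), mirroring the openssl/ln steps in the log above.
    func installCACert(certPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join(certsDir, hash+".0")
        _ = os.Remove(link) // refresh any stale link
        return os.Symlink(certPath, link)
    }

    func main() {
        if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }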
	I0316 00:37:51.089477  130012 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:37:51.094606  130012 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0316 00:37:51.094666  130012 kubeadm.go:391] StartCluster: {Name:kindnet-869135 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4
ClusterName:kindnet-869135 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:37:51.094746  130012 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:37:51.094798  130012 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:37:51.144626  130012 cri.go:89] found id: ""
	I0316 00:37:51.144693  130012 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0316 00:37:51.156109  130012 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:37:51.166947  130012 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:37:51.178444  130012 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:37:51.178468  130012 kubeadm.go:156] found existing configuration files:
	
	I0316 00:37:51.178529  130012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:37:51.188868  130012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:37:51.188936  130012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:37:51.199847  130012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:37:51.210338  130012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:37:51.210417  130012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:37:51.221216  130012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:37:51.232741  130012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:37:51.232811  130012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:37:51.243518  130012 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:37:51.258022  130012 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:37:51.258084  130012 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:37:51.272916  130012 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:37:51.397863  130012 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0316 00:37:51.397937  130012 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:37:51.557037  130012 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:37:51.557206  130012 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:37:51.557353  130012 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:37:51.830552  130012 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:37:51.588698  129541 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:37:51.588734  129541 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0316 00:37:51.678979  129541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:37:51.750225  129541 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:37:53.532034  129541 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.943248319s)
	I0316 00:37:53.532073  129541 start.go:948] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0316 00:37:53.533623  129541 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.944884645s)
	I0316 00:37:53.534765  129541 node_ready.go:35] waiting up to 15m0s for node "auto-869135" to be "Ready" ...
	I0316 00:37:53.549733  129541 node_ready.go:49] node "auto-869135" has status "Ready":"True"
	I0316 00:37:53.549755  129541 node_ready.go:38] duration metric: took 14.970013ms for node "auto-869135" to be "Ready" ...
	I0316 00:37:53.549764  129541 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:37:53.565345  129541 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace to be "Ready" ...
	I0316 00:37:53.849277  129541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.170253515s)
	I0316 00:37:53.849338  129541 main.go:141] libmachine: Making call to close driver server
	I0316 00:37:53.849352  129541 main.go:141] libmachine: (auto-869135) Calling .Close
	I0316 00:37:53.849281  129541 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.09901532s)
	I0316 00:37:53.849652  129541 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:37:53.849684  129541 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:37:53.849694  129541 main.go:141] libmachine: Making call to close driver server
	I0316 00:37:53.849702  129541 main.go:141] libmachine: (auto-869135) Calling .Close
	I0316 00:37:53.851444  129541 main.go:141] libmachine: (auto-869135) DBG | Closing plugin on server side
	I0316 00:37:53.851449  129541 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:37:53.851479  129541 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:37:53.851499  129541 main.go:141] libmachine: Making call to close driver server
	I0316 00:37:53.851515  129541 main.go:141] libmachine: (auto-869135) Calling .Close
	I0316 00:37:53.851806  129541 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:37:53.851826  129541 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:37:53.851836  129541 main.go:141] libmachine: Making call to close driver server
	I0316 00:37:53.851844  129541 main.go:141] libmachine: (auto-869135) Calling .Close
	I0316 00:37:53.853001  129541 main.go:141] libmachine: (auto-869135) DBG | Closing plugin on server side
	I0316 00:37:53.853001  129541 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:37:53.853019  129541 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:37:53.871234  129541 main.go:141] libmachine: Making call to close driver server
	I0316 00:37:53.871314  129541 main.go:141] libmachine: (auto-869135) Calling .Close
	I0316 00:37:53.871679  129541 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:37:53.871701  129541 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:37:53.873341  129541 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0316 00:37:51.833314  130012 out.go:204]   - Generating certificates and keys ...
	I0316 00:37:51.833442  130012 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:37:51.833531  130012 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:37:52.093125  130012 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0316 00:37:52.378114  130012 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0316 00:37:52.633530  130012 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0316 00:37:52.756331  130012 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0316 00:37:52.960712  130012 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0316 00:37:52.960919  130012 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kindnet-869135 localhost] and IPs [192.168.61.68 127.0.0.1 ::1]
	I0316 00:37:53.101470  130012 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0316 00:37:53.101682  130012 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kindnet-869135 localhost] and IPs [192.168.61.68 127.0.0.1 ::1]
	I0316 00:37:53.175872  130012 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0316 00:37:53.307989  130012 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0316 00:37:54.121241  130012 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0316 00:37:54.122046  130012 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:37:50.073611  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:50.074104  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:50.074129  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:50.074057  130527 retry.go:31] will retry after 2.213181539s: waiting for machine to come up
	I0316 00:37:52.288613  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:52.289416  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:52.289443  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:52.289333  130527 retry.go:31] will retry after 2.879327689s: waiting for machine to come up
	I0316 00:37:54.537007  130012 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:37:54.623006  130012 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:37:54.796134  130012 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:37:54.844871  130012 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:37:54.845456  130012 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:37:54.847622  130012 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:37:53.874684  129541 addons.go:505] duration metric: took 2.614525406s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0316 00:37:54.037171  129541 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-869135" context rescaled to 1 replicas
	I0316 00:37:55.574279  129541 pod_ready.go:102] pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace has status "Ready":"False"
	I0316 00:37:54.849479  130012 out.go:204]   - Booting up control plane ...
	I0316 00:37:54.849611  130012 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:37:54.849714  130012 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:37:54.850133  130012 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:37:54.875312  130012 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:37:54.877785  130012 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:37:54.877908  130012 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:37:55.040555  130012 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:37:55.171123  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:55.171630  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:55.171662  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:55.171576  130527 retry.go:31] will retry after 2.710247146s: waiting for machine to come up
	I0316 00:37:57.884403  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:37:57.884912  130439 main.go:141] libmachine: (newest-cni-143629) DBG | unable to find current IP address of domain newest-cni-143629 in network mk-newest-cni-143629
	I0316 00:37:57.884944  130439 main.go:141] libmachine: (newest-cni-143629) DBG | I0316 00:37:57.884829  130527 retry.go:31] will retry after 3.498197579s: waiting for machine to come up
	I0316 00:37:58.073066  129541 pod_ready.go:102] pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:00.573408  129541 pod_ready.go:102] pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:01.048787  130012 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.009302 seconds
	I0316 00:38:01.048989  130012 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:38:01.067801  130012 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:38:01.604404  130012 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:38:01.604702  130012 kubeadm.go:309] [mark-control-plane] Marking the node kindnet-869135 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:38:02.121100  130012 kubeadm.go:309] [bootstrap-token] Using token: rupy6g.lui6gq2h5dr6oyer
	I0316 00:38:02.122724  130012 out.go:204]   - Configuring RBAC rules ...
	I0316 00:38:02.122879  130012 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:38:02.133312  130012 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:38:02.148448  130012 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:38:02.152340  130012 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:38:02.159691  130012 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:38:02.165011  130012 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:38:02.182831  130012 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:38:02.438647  130012 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:38:02.557508  130012 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:38:02.558172  130012 kubeadm.go:309] 
	I0316 00:38:02.558267  130012 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:38:02.558288  130012 kubeadm.go:309] 
	I0316 00:38:02.558399  130012 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:38:02.558420  130012 kubeadm.go:309] 
	I0316 00:38:02.558460  130012 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:38:02.558556  130012 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:38:02.558624  130012 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:38:02.558634  130012 kubeadm.go:309] 
	I0316 00:38:02.558698  130012 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:38:02.558726  130012 kubeadm.go:309] 
	I0316 00:38:02.558834  130012 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:38:02.558844  130012 kubeadm.go:309] 
	I0316 00:38:02.558915  130012 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:38:02.559048  130012 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:38:02.559143  130012 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:38:02.559155  130012 kubeadm.go:309] 
	I0316 00:38:02.559257  130012 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:38:02.559374  130012 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:38:02.559386  130012 kubeadm.go:309] 
	I0316 00:38:02.559477  130012 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token rupy6g.lui6gq2h5dr6oyer \
	I0316 00:38:02.559603  130012 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:38:02.559636  130012 kubeadm.go:309] 	--control-plane 
	I0316 00:38:02.559650  130012 kubeadm.go:309] 
	I0316 00:38:02.559754  130012 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:38:02.559764  130012 kubeadm.go:309] 
	I0316 00:38:02.559853  130012 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token rupy6g.lui6gq2h5dr6oyer \
	I0316 00:38:02.559969  130012 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
	I0316 00:38:02.561977  130012 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:38:02.562012  130012 cni.go:84] Creating CNI manager for "kindnet"
	I0316 00:38:02.563794  130012 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0316 00:38:02.565335  130012 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0316 00:38:02.605266  130012 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0316 00:38:02.605293  130012 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0316 00:38:02.629799  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0316 00:38:03.716053  130012 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.086215117s)
	I0316 00:38:03.716102  130012 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:38:03.716183  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:03.716224  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-869135 minikube.k8s.io/updated_at=2024_03_16T00_38_03_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=kindnet-869135 minikube.k8s.io/primary=true
	I0316 00:38:03.884216  130012 ops.go:34] apiserver oom_adj: -16
	I0316 00:38:03.909879  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:01.384318  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.384901  130439 main.go:141] libmachine: (newest-cni-143629) Found IP for machine: 192.168.39.122
	I0316 00:38:01.384926  130439 main.go:141] libmachine: (newest-cni-143629) Reserving static IP address...
	I0316 00:38:01.384956  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has current primary IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.385374  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "newest-cni-143629", mac: "52:54:00:8b:6b:4d", ip: "192.168.39.122"} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.385407  130439 main.go:141] libmachine: (newest-cni-143629) Reserved static IP address: 192.168.39.122
	I0316 00:38:01.385432  130439 main.go:141] libmachine: (newest-cni-143629) DBG | skip adding static IP to network mk-newest-cni-143629 - found existing host DHCP lease matching {name: "newest-cni-143629", mac: "52:54:00:8b:6b:4d", ip: "192.168.39.122"}
	I0316 00:38:01.385449  130439 main.go:141] libmachine: (newest-cni-143629) DBG | Getting to WaitForSSH function...
	I0316 00:38:01.385457  130439 main.go:141] libmachine: (newest-cni-143629) Waiting for SSH to be available...
	I0316 00:38:01.387532  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.387854  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.387883  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.387903  130439 main.go:141] libmachine: (newest-cni-143629) DBG | Using SSH client type: external
	I0316 00:38:01.387958  130439 main.go:141] libmachine: (newest-cni-143629) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa (-rw-------)
	I0316 00:38:01.387988  130439 main.go:141] libmachine: (newest-cni-143629) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.122 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:38:01.388004  130439 main.go:141] libmachine: (newest-cni-143629) DBG | About to run SSH command:
	I0316 00:38:01.388017  130439 main.go:141] libmachine: (newest-cni-143629) DBG | exit 0
	I0316 00:38:01.519302  130439 main.go:141] libmachine: (newest-cni-143629) DBG | SSH cmd err, output: <nil>: 
	I0316 00:38:01.519673  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetConfigRaw
	I0316 00:38:01.520467  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:38:01.523141  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.523552  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.523593  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.523807  130439 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/config.json ...
	I0316 00:38:01.524030  130439 machine.go:94] provisionDockerMachine start ...
	I0316 00:38:01.524056  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:38:01.524286  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:01.526593  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.526973  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.527002  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.527131  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:01.527334  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.527504  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.527646  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:01.527840  130439 main.go:141] libmachine: Using SSH client type: native
	I0316 00:38:01.528070  130439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:38:01.528085  130439 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:38:01.635760  130439 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:38:01.635788  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:38:01.636041  130439 buildroot.go:166] provisioning hostname "newest-cni-143629"
	I0316 00:38:01.636079  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:38:01.636265  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:01.639554  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.640053  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.640080  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.640271  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:01.640489  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.640676  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.640843  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:01.641010  130439 main.go:141] libmachine: Using SSH client type: native
	I0316 00:38:01.641209  130439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:38:01.641228  130439 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-143629 && echo "newest-cni-143629" | sudo tee /etc/hostname
	I0316 00:38:01.764666  130439 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-143629
	
	I0316 00:38:01.764697  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:01.767795  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.768205  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.768229  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.768484  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:01.768725  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.768943  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.769128  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:01.769360  130439 main.go:141] libmachine: Using SSH client type: native
	I0316 00:38:01.769628  130439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:38:01.769656  130439 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-143629' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-143629/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-143629' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:38:01.889763  130439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:38:01.889796  130439 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:38:01.889816  130439 buildroot.go:174] setting up certificates
	I0316 00:38:01.889827  130439 provision.go:84] configureAuth start
	I0316 00:38:01.889835  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetMachineName
	I0316 00:38:01.890185  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:38:01.892912  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.893364  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.893391  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.893487  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:01.895991  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.896346  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.896360  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.896528  130439 provision.go:143] copyHostCerts
	I0316 00:38:01.896607  130439 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:38:01.896618  130439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:38:01.896687  130439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:38:01.896833  130439 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:38:01.896847  130439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:38:01.896876  130439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:38:01.896945  130439 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:38:01.896955  130439 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:38:01.896980  130439 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:38:01.897057  130439 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.newest-cni-143629 san=[127.0.0.1 192.168.39.122 localhost minikube newest-cni-143629]
	I0316 00:38:01.969575  130439 provision.go:177] copyRemoteCerts
	I0316 00:38:01.969643  130439 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:38:01.969677  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:01.972598  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.973007  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:01.973032  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:01.973218  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:01.973450  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:01.973622  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:01.973778  130439 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:38:02.058714  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:38:02.085597  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:38:02.112457  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0316 00:38:02.139577  130439 provision.go:87] duration metric: took 249.735137ms to configureAuth
	I0316 00:38:02.139614  130439 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:38:02.139860  130439 config.go:182] Loaded profile config "newest-cni-143629": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:38:02.139975  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:02.143353  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.143735  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:02.143781  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.143968  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:02.144179  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.144351  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.144512  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:02.144712  130439 main.go:141] libmachine: Using SSH client type: native
	I0316 00:38:02.144916  130439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:38:02.144936  130439 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:38:02.441355  130439 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:38:02.441389  130439 machine.go:97] duration metric: took 917.341076ms to provisionDockerMachine
	I0316 00:38:02.441404  130439 start.go:293] postStartSetup for "newest-cni-143629" (driver="kvm2")
	I0316 00:38:02.441419  130439 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:38:02.441441  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:38:02.441849  130439 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:38:02.441888  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:02.445165  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.445562  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:02.445598  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.445770  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:02.445982  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.446148  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:02.446328  130439 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:38:02.535255  130439 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:38:02.539971  130439 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:38:02.540034  130439 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:38:02.540112  130439 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:38:02.540294  130439 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:38:02.540455  130439 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:38:02.553306  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:38:02.585353  130439 start.go:296] duration metric: took 143.935274ms for postStartSetup
	I0316 00:38:02.585418  130439 fix.go:56] duration metric: took 19.688704754s for fixHost
	I0316 00:38:02.585443  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:02.588470  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.588865  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:02.588896  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.589154  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:02.589410  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.589636  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.589837  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:02.590072  130439 main.go:141] libmachine: Using SSH client type: native
	I0316 00:38:02.590295  130439 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.122 22 <nil> <nil>}
	I0316 00:38:02.590313  130439 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:38:02.705654  130439 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710549482.683189828
	
	I0316 00:38:02.705684  130439 fix.go:216] guest clock: 1710549482.683189828
	I0316 00:38:02.705694  130439 fix.go:229] Guest: 2024-03-16 00:38:02.683189828 +0000 UTC Remote: 2024-03-16 00:38:02.585423113 +0000 UTC m=+27.895775514 (delta=97.766715ms)
	I0316 00:38:02.705744  130439 fix.go:200] guest clock delta is within tolerance: 97.766715ms
	I0316 00:38:02.705761  130439 start.go:83] releasing machines lock for "newest-cni-143629", held for 19.809091147s
	I0316 00:38:02.705789  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:38:02.706092  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:38:02.709182  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.709616  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:02.709645  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.709810  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:38:02.710364  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:38:02.710666  130439 main.go:141] libmachine: (newest-cni-143629) Calling .DriverName
	I0316 00:38:02.710783  130439 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:38:02.710834  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:02.710938  130439 ssh_runner.go:195] Run: cat /version.json
	I0316 00:38:02.710990  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHHostname
	I0316 00:38:02.714009  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.714275  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.714468  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:02.714494  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.714667  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:02.714698  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:02.714760  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:02.714933  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHPort
	I0316 00:38:02.714939  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.715113  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHKeyPath
	I0316 00:38:02.715113  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:02.715279  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetSSHUsername
	I0316 00:38:02.715489  130439 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:38:02.715559  130439 sshutil.go:53] new ssh client: &{IP:192.168.39.122 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/newest-cni-143629/id_rsa Username:docker}
	I0316 00:38:02.798331  130439 ssh_runner.go:195] Run: systemctl --version
	I0316 00:38:02.820902  130439 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:38:02.978556  130439 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:38:02.986793  130439 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:38:02.986886  130439 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:38:03.009883  130439 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:38:03.009922  130439 start.go:494] detecting cgroup driver to use...
	I0316 00:38:03.010031  130439 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:38:03.030121  130439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:38:03.045516  130439 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:38:03.045589  130439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:38:03.061047  130439 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:38:03.079223  130439 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:38:03.205388  130439 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:38:03.369686  130439 docker.go:233] disabling docker service ...
	I0316 00:38:03.369765  130439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:38:03.387183  130439 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:38:03.404702  130439 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:38:03.559368  130439 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:38:03.713273  130439 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:38:03.730331  130439 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:38:03.751158  130439 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:38:03.751230  130439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:38:03.763167  130439 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:38:03.763249  130439 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:38:03.775117  130439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:38:03.786284  130439 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:38:03.797742  130439 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:38:03.813024  130439 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:38:03.825637  130439 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:38:03.825712  130439 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:38:03.839237  130439 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:38:03.849650  130439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:38:03.997642  130439 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:38:04.150911  130439 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:38:04.151000  130439 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:38:04.156201  130439 start.go:562] Will wait 60s for crictl version
	I0316 00:38:04.156251  130439 ssh_runner.go:195] Run: which crictl
	I0316 00:38:04.160427  130439 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:38:04.199836  130439 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:38:04.199967  130439 ssh_runner.go:195] Run: crio --version
	I0316 00:38:04.233780  130439 ssh_runner.go:195] Run: crio --version
	I0316 00:38:04.266850  130439 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:38:04.268370  130439 main.go:141] libmachine: (newest-cni-143629) Calling .GetIP
	I0316 00:38:04.271316  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:04.271806  130439 main.go:141] libmachine: (newest-cni-143629) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:8b:6b:4d", ip: ""} in network mk-newest-cni-143629: {Iface:virbr1 ExpiryTime:2024-03-16 01:37:55 +0000 UTC Type:0 Mac:52:54:00:8b:6b:4d Iaid: IPaddr:192.168.39.122 Prefix:24 Hostname:newest-cni-143629 Clientid:01:52:54:00:8b:6b:4d}
	I0316 00:38:04.271835  130439 main.go:141] libmachine: (newest-cni-143629) DBG | domain newest-cni-143629 has defined IP address 192.168.39.122 and MAC address 52:54:00:8b:6b:4d in network mk-newest-cni-143629
	I0316 00:38:04.272053  130439 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:38:04.276361  130439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:38:04.291997  130439 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0316 00:38:04.293621  130439 kubeadm.go:877] updating cluster {Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHo
stTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:38:04.293798  130439 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:38:04.293899  130439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:38:04.341880  130439 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:38:04.341948  130439 ssh_runner.go:195] Run: which lz4
	I0316 00:38:04.346272  130439 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0316 00:38:04.350570  130439 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:38:04.350602  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0316 00:38:02.574104  129541 pod_ready.go:102] pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:03.074088  129541 pod_ready.go:97] pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.112 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-16 00:37:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-16 00:37:52 +0000 UTC,FinishedAt:2024-03-16 00:38:02 +0000 UTC,ContainerID:cri-o://a382d23e38f536606ced6c73fa9301d5f81110d13f8140f11b0d44e1bc83bc3a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://a382d23e38f536606ced6c73fa9301d5f81110d13f8140f11b0d44e1bc83bc3a Started:0xc0031aef10 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0316 00:38:03.074119  129541 pod_ready.go:81] duration metric: took 9.508743344s for pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace to be "Ready" ...
	E0316 00:38:03.074130  129541 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-5g2qh" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-16 00:37:51 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.50.112 HostIPs:[] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-03-16 00:37:51 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-16 00:37:52 +0000 UTC,FinishedAt:2024-03-16 00:38:02 +0000 UTC,ContainerID:cri-o://a382d23e38f536606ced6c73fa9301d5f81110d13f8140f11b0d44e1bc83bc3a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://a382d23e38f536606ced6c73fa9301d5f81110d13f8140f11b0d44e1bc83bc3a Started:0xc0031aef10 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0316 00:38:03.074137  129541 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:05.083407  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:04.409914  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:04.910316  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:05.410486  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:05.910125  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:06.410575  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:06.910496  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:07.410754  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:07.910180  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:08.410077  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:08.910616  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:05.973661  130439 crio.go:444] duration metric: took 1.627412292s to copy over tarball
	I0316 00:38:05.973804  130439 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:38:08.599315  130439 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.625473802s)
	I0316 00:38:08.599370  130439 crio.go:451] duration metric: took 2.62566321s to extract the tarball
	I0316 00:38:08.599381  130439 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:38:08.641122  130439 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:38:08.688862  130439 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:38:08.688890  130439 cache_images.go:84] Images are preloaded, skipping loading
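
The preload sequence above (probe `crictl images`, scp the tarball, extract it under /var, re-probe) hinges on the initial image check. Below is a minimal Go sketch of what such a check looks like; the sudo invocation, the target image name, and the JSON shape assumed for `crictl images --output json` are illustrative, not minikube's actual crio.go code.

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictlImages models only the fields of `crictl images --output json` that
// this sketch needs.
type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// imagesPreloaded reports whether the container runtime already has an image
// whose tag contains the wanted name.
func imagesPreloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagesPreloaded("registry.k8s.io/kube-apiserver:v1.29.0-rc.2")
	fmt.Println(ok, err)
}
```
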
	I0316 00:38:08.688901  130439 kubeadm.go:928] updating node { 192.168.39.122 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:38:08.689056  130439 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --feature-gates=ServerSideApply=true --hostname-override=newest-cni-143629 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.122
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:38:08.689157  130439 ssh_runner.go:195] Run: crio config
	I0316 00:38:08.748094  130439 cni.go:84] Creating CNI manager for ""
	I0316 00:38:08.748117  130439 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:38:08.748129  130439 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0316 00:38:08.748161  130439 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.122 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-143629 NodeName:newest-cni-143629 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.122"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.122 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:38:08.748323  130439 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.122
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-143629"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.122
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.122"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:38:08.748404  130439 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:38:08.759274  130439 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:38:08.759351  130439 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:38:08.769847  130439 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0316 00:38:08.787847  130439 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:38:08.805784  130439 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
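
The YAML logged at kubeadm.go:187 is generated from the options struct at kubeadm.go:181 and pushed here as /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of rendering such a ClusterConfiguration fragment from those options; the template text and struct names are illustrative assumptions, not minikube's real bootstrapper templates.

```go
package main

import (
	"os"
	"text/template"
)

// kubeadmOpts carries the handful of values this sketch substitutes into the
// config fragment; they mirror the PodSubnet/ServiceCIDR/KubernetesVersion
// options visible in the log above.
type kubeadmOpts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	opts := kubeadmOpts{
		KubernetesVersion: "v1.29.0-rc.2",
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Render to stdout; the real flow writes the result to the remote host
	// as kubeadm.yaml.new before swapping it into place.
	tmpl := template.Must(template.New("kubeadm").Parse(clusterConfigTmpl))
	_ = tmpl.Execute(os.Stdout, opts)
}
```
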
	I0316 00:38:08.825422  130439 ssh_runner.go:195] Run: grep 192.168.39.122	control-plane.minikube.internal$ /etc/hosts
	I0316 00:38:08.829671  130439 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.122	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:38:08.843343  130439 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:38:08.991654  130439 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:38:09.013422  130439 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629 for IP: 192.168.39.122
	I0316 00:38:09.013451  130439 certs.go:194] generating shared ca certs ...
	I0316 00:38:09.013476  130439 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:38:09.013680  130439 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:38:09.013774  130439 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:38:09.013793  130439 certs.go:256] generating profile certs ...
	I0316 00:38:09.013961  130439 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/client.key
	I0316 00:38:09.014061  130439 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/apiserver.key.3c0cc93c
	I0316 00:38:09.014121  130439 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/proxy-client.key
	I0316 00:38:09.014298  130439 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:38:09.014347  130439 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:38:09.014363  130439 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:38:09.014406  130439 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:38:09.014457  130439 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:38:09.014499  130439 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:38:09.014563  130439 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:38:09.015278  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:38:09.079129  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:38:09.124685  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:38:09.167920  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:38:09.221029  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:38:09.255371  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:38:09.284375  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:38:09.312812  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:38:09.340379  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:38:09.367368  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:38:09.393645  130439 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:38:09.423543  130439 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:38:09.444146  130439 ssh_runner.go:195] Run: openssl version
	I0316 00:38:09.451158  130439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:38:09.464694  130439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:38:09.469962  130439 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:38:09.470052  130439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:38:09.476665  130439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:38:09.489467  130439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:38:09.501796  130439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:38:09.506661  130439 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:38:09.506728  130439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:38:09.513413  130439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:38:09.528077  130439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:38:09.539742  130439 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:38:09.544812  130439 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:38:09.544874  130439 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:38:09.551032  130439 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:38:09.562484  130439 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:38:09.567378  130439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:38:09.574109  130439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:38:09.581301  130439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:38:09.588152  130439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:38:09.594492  130439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:38:09.600984  130439 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
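
Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether a certificate will still be valid in 24 hours. The same check in Go, as a minimal sketch (the certificate path is an example taken from the log, and the helper is illustrative rather than minikube's certs.go):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// duration d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when "now + d" is past NotAfter, i.e. the cert expires within d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
```
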
	I0316 00:38:09.607215  130439 kubeadm.go:391] StartCluster: {Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.122 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:38:09.607358  130439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:38:09.607415  130439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:38:09.647348  130439 cri.go:89] found id: ""
	I0316 00:38:09.647429  130439 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:38:09.658062  130439 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:38:09.658083  130439 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:38:09.658088  130439 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:38:09.658136  130439 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:38:09.668657  130439 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:38:09.669385  130439 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-143629" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:38:09.669789  130439 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-143629" cluster setting kubeconfig missing "newest-cni-143629" context setting]
	I0316 00:38:09.670353  130439 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:38:09.671920  130439 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:38:09.681870  130439 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.122
	I0316 00:38:09.681900  130439 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:38:09.681911  130439 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:38:09.681950  130439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:38:09.728421  130439 cri.go:89] found id: ""
	I0316 00:38:09.728518  130439 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:38:09.749192  130439 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:38:07.583090  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:10.082358  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:09.409990  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:09.910769  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:10.410518  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:10.910080  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:11.410205  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:11.910882  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:12.410109  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:12.910516  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:13.410637  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:13.910203  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:09.760110  130439 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:38:09.760135  130439 kubeadm.go:156] found existing configuration files:
	
	I0316 00:38:09.760189  130439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:38:09.769685  130439 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:38:09.769742  130439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:38:09.780415  130439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:38:09.790390  130439 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:38:09.790467  130439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:38:09.801301  130439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:38:09.812250  130439 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:38:09.812336  130439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:38:09.823110  130439 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:38:09.833268  130439 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:38:09.833351  130439 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
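
The loop above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes files that do not mention it, so the following kubeadm init phases can regenerate them. A minimal Go sketch of that stale-config cleanup; the helper itself is illustrative, not minikube's kubeadm.go, while the paths and endpoint are taken from the log.

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// removeIfStale deletes path when the file exists but does not reference the
// expected control-plane endpoint.
func removeIfStale(path, endpoint string) error {
	data, err := os.ReadFile(path)
	if os.IsNotExist(err) {
		return nil // nothing to clean up
	}
	if err != nil {
		return err
	}
	if strings.Contains(string(data), endpoint) {
		return nil // already points at the right endpoint
	}
	return os.Remove(path)
}

func main() {
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		fmt.Println(f, removeIfStale(f, "https://control-plane.minikube.internal:8443"))
	}
}
```
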
	I0316 00:38:09.844341  130439 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:38:09.854288  130439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:38:09.975504  130439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:38:10.984117  130439 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.008567665s)
	I0316 00:38:10.984160  130439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:38:11.212231  130439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:38:11.281730  130439 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:38:11.371253  130439 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:38:11.371369  130439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:38:11.871644  130439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:38:12.371851  130439 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:38:12.388654  130439 api_server.go:72] duration metric: took 1.017400606s to wait for apiserver process to appear ...
	I0316 00:38:12.388686  130439 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:38:12.388709  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:12.389247  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": dial tcp 192.168.39.122:8443: connect: connection refused
	I0316 00:38:12.888943  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:14.410519  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:14.910136  130012 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:38:15.053338  130012 kubeadm.go:1107] duration metric: took 11.337203314s to wait for elevateKubeSystemPrivileges
	W0316 00:38:15.053388  130012 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:38:15.053398  130012 kubeadm.go:393] duration metric: took 23.958736684s to StartCluster
	I0316 00:38:15.053419  130012 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:38:15.053500  130012 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:38:15.055113  130012 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:38:15.055351  130012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0316 00:38:15.055367  130012 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.61.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:38:15.057023  130012 out.go:177] * Verifying Kubernetes components...
	I0316 00:38:15.055470  130012 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:38:15.055582  130012 config.go:182] Loaded profile config "kindnet-869135": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:38:15.057070  130012 addons.go:69] Setting default-storageclass=true in profile "kindnet-869135"
	I0316 00:38:15.057120  130012 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kindnet-869135"
	I0316 00:38:15.058721  130012 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:38:15.057061  130012 addons.go:69] Setting storage-provisioner=true in profile "kindnet-869135"
	I0316 00:38:15.058823  130012 addons.go:234] Setting addon storage-provisioner=true in "kindnet-869135"
	I0316 00:38:15.058866  130012 host.go:66] Checking if "kindnet-869135" exists ...
	I0316 00:38:15.057459  130012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:38:15.058907  130012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:38:15.059202  130012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:38:15.059224  130012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:38:15.074233  130012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42913
	I0316 00:38:15.074563  130012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37355
	I0316 00:38:15.074800  130012 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:38:15.075392  130012 main.go:141] libmachine: Using API Version  1
	I0316 00:38:15.075414  130012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:38:15.075462  130012 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:38:15.075821  130012 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:38:15.076080  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetState
	I0316 00:38:15.076330  130012 main.go:141] libmachine: Using API Version  1
	I0316 00:38:15.076362  130012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:38:15.076717  130012 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:38:15.077443  130012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:38:15.077496  130012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:38:15.080403  130012 addons.go:234] Setting addon default-storageclass=true in "kindnet-869135"
	I0316 00:38:15.080450  130012 host.go:66] Checking if "kindnet-869135" exists ...
	I0316 00:38:15.080827  130012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:38:15.080856  130012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:38:15.093672  130012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42099
	I0316 00:38:15.094333  130012 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:38:15.094819  130012 main.go:141] libmachine: Using API Version  1
	I0316 00:38:15.094837  130012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:38:15.095171  130012 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:38:15.095388  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetState
	I0316 00:38:15.097032  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:38:15.098973  130012 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:38:12.082943  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:14.581878  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:15.100418  130012 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:38:15.100437  130012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:38:15.100458  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:38:15.103249  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:38:15.103728  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:38:15.103764  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:38:15.103986  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:38:15.104168  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:38:15.104307  130012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34985
	I0316 00:38:15.104480  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:38:15.104625  130012 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa Username:docker}
	I0316 00:38:15.104981  130012 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:38:15.105606  130012 main.go:141] libmachine: Using API Version  1
	I0316 00:38:15.105618  130012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:38:15.106057  130012 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:38:15.106689  130012 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:38:15.106727  130012 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:38:15.122774  130012 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33405
	I0316 00:38:15.123377  130012 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:38:15.123957  130012 main.go:141] libmachine: Using API Version  1
	I0316 00:38:15.123980  130012 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:38:15.124364  130012 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:38:15.124589  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetState
	I0316 00:38:15.126359  130012 main.go:141] libmachine: (kindnet-869135) Calling .DriverName
	I0316 00:38:15.126735  130012 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:38:15.126753  130012 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:38:15.126773  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHHostname
	I0316 00:38:15.130141  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:38:15.130828  130012 main.go:141] libmachine: (kindnet-869135) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:c2:ba", ip: ""} in network mk-kindnet-869135: {Iface:virbr3 ExpiryTime:2024-03-16 01:37:34 +0000 UTC Type:0 Mac:52:54:00:db:c2:ba Iaid: IPaddr:192.168.61.68 Prefix:24 Hostname:kindnet-869135 Clientid:01:52:54:00:db:c2:ba}
	I0316 00:38:15.130959  130012 main.go:141] libmachine: (kindnet-869135) DBG | domain kindnet-869135 has defined IP address 192.168.61.68 and MAC address 52:54:00:db:c2:ba in network mk-kindnet-869135
	I0316 00:38:15.132140  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHPort
	I0316 00:38:15.132321  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHKeyPath
	I0316 00:38:15.132678  130012 main.go:141] libmachine: (kindnet-869135) Calling .GetSSHUsername
	I0316 00:38:15.132846  130012 sshutil.go:53] new ssh client: &{IP:192.168.61.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/kindnet-869135/id_rsa Username:docker}
	I0316 00:38:15.478458  130012 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:38:15.478537  130012 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0316 00:38:15.581845  130012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:38:15.608611  130012 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:38:16.213168  130012 start.go:948] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
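
The sed pipeline a few lines above inserts a `hosts` block for host.minikube.internal into CoreDNS's Corefile before its `forward . /etc/resolv.conf` directive, which is what this "host record injected" message reports. A minimal Go sketch of that edit; the sample Corefile and host IP are illustrative.

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord adds a hosts{} stanza resolving host.minikube.internal to
// hostIP, placed immediately before the forward directive.
func injectHostRecord(corefile, hostIP string) string {
	hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock)
		}
		out.WriteString(line)
	}
	return out.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
}
```
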
	I0316 00:38:16.214802  130012 node_ready.go:35] waiting up to 15m0s for node "kindnet-869135" to be "Ready" ...
	I0316 00:38:16.608424  130012 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.026535336s)
	I0316 00:38:16.608469  130012 main.go:141] libmachine: Making call to close driver server
	I0316 00:38:16.608480  130012 main.go:141] libmachine: (kindnet-869135) Calling .Close
	I0316 00:38:16.608481  130012 main.go:141] libmachine: Making call to close driver server
	I0316 00:38:16.608501  130012 main.go:141] libmachine: (kindnet-869135) Calling .Close
	I0316 00:38:16.608849  130012 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:38:16.608868  130012 main.go:141] libmachine: (kindnet-869135) DBG | Closing plugin on server side
	I0316 00:38:16.608872  130012 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:38:16.608887  130012 main.go:141] libmachine: Making call to close driver server
	I0316 00:38:16.608896  130012 main.go:141] libmachine: (kindnet-869135) Calling .Close
	I0316 00:38:16.609128  130012 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:38:16.609149  130012 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:38:16.610363  130012 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:38:16.610390  130012 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:38:16.610449  130012 main.go:141] libmachine: (kindnet-869135) DBG | Closing plugin on server side
	I0316 00:38:16.610489  130012 main.go:141] libmachine: Making call to close driver server
	I0316 00:38:16.610499  130012 main.go:141] libmachine: (kindnet-869135) Calling .Close
	I0316 00:38:16.610839  130012 main.go:141] libmachine: (kindnet-869135) DBG | Closing plugin on server side
	I0316 00:38:16.610887  130012 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:38:16.610900  130012 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:38:16.622922  130012 main.go:141] libmachine: Making call to close driver server
	I0316 00:38:16.622974  130012 main.go:141] libmachine: (kindnet-869135) Calling .Close
	I0316 00:38:16.624182  130012 main.go:141] libmachine: (kindnet-869135) DBG | Closing plugin on server side
	I0316 00:38:16.624299  130012 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:38:16.624649  130012 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:38:16.626406  130012 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0316 00:38:16.628205  130012 addons.go:505] duration metric: took 1.572737368s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0316 00:38:16.723996  130012 kapi.go:248] "coredns" deployment in "kube-system" namespace and "kindnet-869135" context rescaled to 1 replicas
	I0316 00:38:18.219982  130012 node_ready.go:53] node "kindnet-869135" has status "Ready":"False"
	I0316 00:38:17.890093  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0316 00:38:17.890174  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
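
The api_server.go lines here poll https://192.168.39.122:8443/healthz roughly every half second, tolerating connection refused and timeouts until the restarted apiserver answers. A minimal Go sketch of such a polling loop, with TLS verification disabled since the probe targets a self-signed apiserver cert; the URL, interval, and deadline are illustrative values drawn from this log rather than fixed minikube settings.

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz retries GET on the healthz URL until it returns 200 OK or
// the overall timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // retry cadence, mirroring the ~500ms spacing in the log
	}
	return fmt.Errorf("apiserver at %s did not become healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.39.122:8443/healthz", 2*time.Minute))
}
```
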
	I0316 00:38:16.582745  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:18.583282  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:21.080849  129541 pod_ready.go:102] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:22.083823  129541 pod_ready.go:92] pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:22.083847  129541 pod_ready.go:81] duration metric: took 19.009696182s for pod "coredns-5dd5756b68-m6x9g" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.083859  129541 pod_ready.go:78] waiting up to 15m0s for pod "etcd-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.088976  129541 pod_ready.go:92] pod "etcd-auto-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:22.088997  129541 pod_ready.go:81] duration metric: took 5.131927ms for pod "etcd-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.089005  129541 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.094934  129541 pod_ready.go:92] pod "kube-apiserver-auto-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:22.094955  129541 pod_ready.go:81] duration metric: took 5.943223ms for pod "kube-apiserver-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.094964  129541 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.100912  129541 pod_ready.go:92] pod "kube-controller-manager-auto-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:22.100936  129541 pod_ready.go:81] duration metric: took 5.965064ms for pod "kube-controller-manager-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.100945  129541 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-266kp" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.105205  129541 pod_ready.go:92] pod "kube-proxy-266kp" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:22.105226  129541 pod_ready.go:81] duration metric: took 4.273826ms for pod "kube-proxy-266kp" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.105237  129541 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.478741  129541 pod_ready.go:92] pod "kube-scheduler-auto-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:22.478766  129541 pod_ready.go:81] duration metric: took 373.521136ms for pod "kube-scheduler-auto-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.478774  129541 pod_ready.go:38] duration metric: took 28.929000989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
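
The pod_ready.go entries above repeatedly fetch each system pod and wait for its Ready condition to turn True. A minimal client-go sketch of that wait pattern, assuming client-go is available; the kubeconfig path, namespace, pod name, and poll interval are example values, not the test harness's actual code.

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// passes.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17991-75602/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-5dd5756b68-m6x9g", 15*time.Minute))
}
```
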
	I0316 00:38:22.478789  129541 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:38:22.478837  129541 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:38:22.499000  129541 api_server.go:72] duration metric: took 31.238971516s to wait for apiserver process to appear ...
	I0316 00:38:22.499032  129541 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:38:22.499058  129541 api_server.go:253] Checking apiserver healthz at https://192.168.50.112:8443/healthz ...
	I0316 00:38:22.506931  129541 api_server.go:279] https://192.168.50.112:8443/healthz returned 200:
	ok
	I0316 00:38:22.508236  129541 api_server.go:141] control plane version: v1.28.4
	I0316 00:38:22.508260  129541 api_server.go:131] duration metric: took 9.219977ms to wait for apiserver health ...
	I0316 00:38:22.508268  129541 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:38:22.681767  129541 system_pods.go:59] 7 kube-system pods found
	I0316 00:38:22.681804  129541 system_pods.go:61] "coredns-5dd5756b68-m6x9g" [96db03cc-cb5b-48c8-aebb-832b75fb564d] Running
	I0316 00:38:22.681810  129541 system_pods.go:61] "etcd-auto-869135" [b020fd77-cc98-4660-81ab-5381ae5ba8f0] Running
	I0316 00:38:22.681813  129541 system_pods.go:61] "kube-apiserver-auto-869135" [8da5982e-3fd6-4905-8751-5e956cbd94df] Running
	I0316 00:38:22.681817  129541 system_pods.go:61] "kube-controller-manager-auto-869135" [87ef9b8f-bac4-432f-bdc5-5aa2c609729a] Running
	I0316 00:38:22.681819  129541 system_pods.go:61] "kube-proxy-266kp" [0a01c0c9-feb2-4025-9fbd-256acffcecd7] Running
	I0316 00:38:22.681822  129541 system_pods.go:61] "kube-scheduler-auto-869135" [e96da885-d50d-4fc4-bf98-1a613682097b] Running
	I0316 00:38:22.681825  129541 system_pods.go:61] "storage-provisioner" [8505cf12-7e24-4369-8eb1-6715e2d8a9cb] Running
	I0316 00:38:22.681832  129541 system_pods.go:74] duration metric: took 173.557074ms to wait for pod list to return data ...
	I0316 00:38:22.681840  129541 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:38:22.878258  129541 default_sa.go:45] found service account: "default"
	I0316 00:38:22.878297  129541 default_sa.go:55] duration metric: took 196.448207ms for default service account to be created ...
	I0316 00:38:22.878310  129541 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:38:23.081575  129541 system_pods.go:86] 7 kube-system pods found
	I0316 00:38:23.081614  129541 system_pods.go:89] "coredns-5dd5756b68-m6x9g" [96db03cc-cb5b-48c8-aebb-832b75fb564d] Running
	I0316 00:38:23.081624  129541 system_pods.go:89] "etcd-auto-869135" [b020fd77-cc98-4660-81ab-5381ae5ba8f0] Running
	I0316 00:38:23.081630  129541 system_pods.go:89] "kube-apiserver-auto-869135" [8da5982e-3fd6-4905-8751-5e956cbd94df] Running
	I0316 00:38:23.081635  129541 system_pods.go:89] "kube-controller-manager-auto-869135" [87ef9b8f-bac4-432f-bdc5-5aa2c609729a] Running
	I0316 00:38:23.081641  129541 system_pods.go:89] "kube-proxy-266kp" [0a01c0c9-feb2-4025-9fbd-256acffcecd7] Running
	I0316 00:38:23.081646  129541 system_pods.go:89] "kube-scheduler-auto-869135" [e96da885-d50d-4fc4-bf98-1a613682097b] Running
	I0316 00:38:23.081652  129541 system_pods.go:89] "storage-provisioner" [8505cf12-7e24-4369-8eb1-6715e2d8a9cb] Running
	I0316 00:38:23.081662  129541 system_pods.go:126] duration metric: took 203.344405ms to wait for k8s-apps to be running ...
	I0316 00:38:23.081673  129541 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:38:23.081728  129541 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:38:23.098021  129541 system_svc.go:56] duration metric: took 16.339008ms WaitForService to wait for kubelet
	I0316 00:38:23.098058  129541 kubeadm.go:576] duration metric: took 31.838035717s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:38:23.098088  129541 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:38:23.279035  129541 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:38:23.279056  129541 node_conditions.go:123] node cpu capacity is 2
	I0316 00:38:23.279068  129541 node_conditions.go:105] duration metric: took 180.974767ms to run NodePressure ...
	I0316 00:38:23.279080  129541 start.go:240] waiting for startup goroutines ...
	I0316 00:38:23.279089  129541 start.go:245] waiting for cluster config update ...
	I0316 00:38:23.279103  129541 start.go:254] writing updated cluster config ...
	I0316 00:38:23.279420  129541 ssh_runner.go:195] Run: rm -f paused
	I0316 00:38:23.328789  129541 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:38:23.330730  129541 out.go:177] * Done! kubectl is now configured to use "auto-869135" cluster and "default" namespace by default
	I0316 00:38:20.218446  130012 node_ready.go:49] node "kindnet-869135" has status "Ready":"True"
	I0316 00:38:20.218481  130012 node_ready.go:38] duration metric: took 4.003649372s for node "kindnet-869135" to be "Ready" ...
	I0316 00:38:20.218493  130012 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:38:20.226450  130012 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-rfg9n" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:22.234865  130012 pod_ready.go:102] pod "coredns-5dd5756b68-rfg9n" in "kube-system" namespace has status "Ready":"False"
	I0316 00:38:23.236394  130012 pod_ready.go:92] pod "coredns-5dd5756b68-rfg9n" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:23.236426  130012 pod_ready.go:81] duration metric: took 3.009948408s for pod "coredns-5dd5756b68-rfg9n" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.236440  130012 pod_ready.go:78] waiting up to 15m0s for pod "etcd-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.244185  130012 pod_ready.go:92] pod "etcd-kindnet-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:23.244204  130012 pod_ready.go:81] duration metric: took 7.756364ms for pod "etcd-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.244217  130012 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.250613  130012 pod_ready.go:92] pod "kube-apiserver-kindnet-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:23.250633  130012 pod_ready.go:81] duration metric: took 6.409511ms for pod "kube-apiserver-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.250646  130012 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.255159  130012 pod_ready.go:92] pod "kube-controller-manager-kindnet-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:23.255176  130012 pod_ready.go:81] duration metric: took 4.522858ms for pod "kube-controller-manager-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.255185  130012 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-x9hsd" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.259449  130012 pod_ready.go:92] pod "kube-proxy-x9hsd" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:23.259466  130012 pod_ready.go:81] duration metric: took 4.275275ms for pod "kube-proxy-x9hsd" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.259473  130012 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.632509  130012 pod_ready.go:92] pod "kube-scheduler-kindnet-869135" in "kube-system" namespace has status "Ready":"True"
	I0316 00:38:23.632539  130012 pod_ready.go:81] duration metric: took 373.058336ms for pod "kube-scheduler-kindnet-869135" in "kube-system" namespace to be "Ready" ...
	I0316 00:38:23.632551  130012 pod_ready.go:38] duration metric: took 3.414044922s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:38:23.632565  130012 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:38:23.632613  130012 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:38:23.651185  130012 api_server.go:72] duration metric: took 8.595784563s to wait for apiserver process to appear ...
	I0316 00:38:23.651210  130012 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:38:23.651232  130012 api_server.go:253] Checking apiserver healthz at https://192.168.61.68:8443/healthz ...
	I0316 00:38:23.658889  130012 api_server.go:279] https://192.168.61.68:8443/healthz returned 200:
	ok
	I0316 00:38:23.660716  130012 api_server.go:141] control plane version: v1.28.4
	I0316 00:38:23.660740  130012 api_server.go:131] duration metric: took 9.523413ms to wait for apiserver health ...
	I0316 00:38:23.660748  130012 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:38:23.836650  130012 system_pods.go:59] 8 kube-system pods found
	I0316 00:38:23.836705  130012 system_pods.go:61] "coredns-5dd5756b68-rfg9n" [942cbb5d-292a-4ed4-96bd-87cbc0775bc1] Running
	I0316 00:38:23.836719  130012 system_pods.go:61] "etcd-kindnet-869135" [54d97419-83b3-410b-a153-4d25bef08d44] Running
	I0316 00:38:23.836725  130012 system_pods.go:61] "kindnet-f4j2j" [d2ad0c2f-82e0-4fc9-b089-da1c3f753e1d] Running
	I0316 00:38:23.836734  130012 system_pods.go:61] "kube-apiserver-kindnet-869135" [1eb0109b-0bf3-4e24-9f05-bf6b521b1ba1] Running
	I0316 00:38:23.836739  130012 system_pods.go:61] "kube-controller-manager-kindnet-869135" [ab9fd45e-cc3b-4278-af14-21c3e29e1672] Running
	I0316 00:38:23.836749  130012 system_pods.go:61] "kube-proxy-x9hsd" [5ce3c324-b717-45a0-8711-c78d24eab606] Running
	I0316 00:38:23.836756  130012 system_pods.go:61] "kube-scheduler-kindnet-869135" [b19d4e54-a766-4be6-817d-5be2096bb8fa] Running
	I0316 00:38:23.836764  130012 system_pods.go:61] "storage-provisioner" [9668312f-3f02-4197-9ed3-5bd79b7e0e46] Running
	I0316 00:38:23.836772  130012 system_pods.go:74] duration metric: took 176.017386ms to wait for pod list to return data ...
	I0316 00:38:23.836782  130012 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:38:24.030654  130012 default_sa.go:45] found service account: "default"
	I0316 00:38:24.030685  130012 default_sa.go:55] duration metric: took 193.893618ms for default service account to be created ...
	I0316 00:38:24.030695  130012 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:38:24.234209  130012 system_pods.go:86] 8 kube-system pods found
	I0316 00:38:24.234244  130012 system_pods.go:89] "coredns-5dd5756b68-rfg9n" [942cbb5d-292a-4ed4-96bd-87cbc0775bc1] Running
	I0316 00:38:24.234252  130012 system_pods.go:89] "etcd-kindnet-869135" [54d97419-83b3-410b-a153-4d25bef08d44] Running
	I0316 00:38:24.234258  130012 system_pods.go:89] "kindnet-f4j2j" [d2ad0c2f-82e0-4fc9-b089-da1c3f753e1d] Running
	I0316 00:38:24.234263  130012 system_pods.go:89] "kube-apiserver-kindnet-869135" [1eb0109b-0bf3-4e24-9f05-bf6b521b1ba1] Running
	I0316 00:38:24.234270  130012 system_pods.go:89] "kube-controller-manager-kindnet-869135" [ab9fd45e-cc3b-4278-af14-21c3e29e1672] Running
	I0316 00:38:24.234276  130012 system_pods.go:89] "kube-proxy-x9hsd" [5ce3c324-b717-45a0-8711-c78d24eab606] Running
	I0316 00:38:24.234281  130012 system_pods.go:89] "kube-scheduler-kindnet-869135" [b19d4e54-a766-4be6-817d-5be2096bb8fa] Running
	I0316 00:38:24.234289  130012 system_pods.go:89] "storage-provisioner" [9668312f-3f02-4197-9ed3-5bd79b7e0e46] Running
	I0316 00:38:24.234299  130012 system_pods.go:126] duration metric: took 203.595828ms to wait for k8s-apps to be running ...
	I0316 00:38:24.234314  130012 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:38:24.234379  130012 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:38:24.256948  130012 system_svc.go:56] duration metric: took 22.623685ms WaitForService to wait for kubelet
	I0316 00:38:24.256981  130012 kubeadm.go:576] duration metric: took 9.201586851s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:38:24.257008  130012 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:38:24.430587  130012 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:38:24.430623  130012 node_conditions.go:123] node cpu capacity is 2
	I0316 00:38:24.430639  130012 node_conditions.go:105] duration metric: took 173.624826ms to run NodePressure ...
	I0316 00:38:24.430655  130012 start.go:240] waiting for startup goroutines ...
	I0316 00:38:24.430668  130012 start.go:245] waiting for cluster config update ...
	I0316 00:38:24.430684  130012 start.go:254] writing updated cluster config ...
	I0316 00:38:24.430993  130012 ssh_runner.go:195] Run: rm -f paused
	I0316 00:38:24.482944  130012 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:38:24.485655  130012 out.go:177] * Done! kubectl is now configured to use "kindnet-869135" cluster and "default" namespace by default
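	(Similarly, the "apiserver process" gate at 00:38:23 above is a pgrep against the kube-apiserver command line. A rough, illustrative equivalent is sketched below; the pattern handling and lack of sudo are assumptions for a local demo and will only match if such a process is actually running.)

	```go
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// apiserverPID returns the PID of the newest process whose full command
	// line matches the given pattern, mirroring the
	// `pgrep -xnf kube-apiserver.*minikube.*` step in the log above.
	func apiserverPID(pattern string) (string, error) {
		// -x: match the whole command line, -n: newest match, -f: match against
		// the full command line rather than just the process name.
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err != nil {
			return "", fmt.Errorf("no matching process: %w", err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		pid, err := apiserverPID("kube-apiserver.*minikube.*")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("kube-apiserver pid:", pid)
	}
	```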
	I0316 00:38:22.890939  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0316 00:38:22.890979  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:27.891211  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0316 00:38:27.891266  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:32.881326  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": read tcp 192.168.39.1:44766->192.168.39.122:8443: read: connection reset by peer
	I0316 00:38:32.881374  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:32.881911  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": dial tcp 192.168.39.122:8443: connect: connection refused
	I0316 00:38:32.889060  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:32.889647  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": dial tcp 192.168.39.122:8443: connect: connection refused
	I0316 00:38:33.389323  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:33.390146  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": dial tcp 192.168.39.122:8443: connect: connection refused
	I0316 00:38:33.889550  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:38.890579  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0316 00:38:38.890630  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
	I0316 00:38:43.890991  130439 api_server.go:269] stopped: https://192.168.39.122:8443/healthz: Get "https://192.168.39.122:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0316 00:38:43.891031  130439 api_server.go:253] Checking apiserver healthz at https://192.168.39.122:8443/healthz ...
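	(The repeated healthz probes above, failing with context deadline exceeded, connection reset, and connection refused while the default-k8s-diff-port apiserver is down, come down to an HTTP GET against https://<node>:8443/healthz with a short client timeout, retried until the endpoint returns 200 or the outer wait expires. A minimal sketch under those assumptions follows; a real probe would trust the cluster CA instead of skipping TLS verification.)

	```go
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// probeHealthz polls the apiserver /healthz endpoint until it returns
	// 200 OK or the deadline passes, returning the last error seen on failure.
	func probeHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly matches the ~5s gaps between probes in the log
			Transport: &http.Transport{
				// Illustration only: a real check should load the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		var lastErr error
		for end := time.Now().Add(deadline); time.Now().Before(end); {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("%s returned 200: %s\n", url, body)
					return nil
				}
				lastErr = fmt.Errorf("status %d", resp.StatusCode)
			} else {
				lastErr = err
			}
			time.Sleep(500 * time.Millisecond) // brief backoff between attempts, as in the log's retry loop
		}
		return fmt.Errorf("apiserver never became healthy: %w", lastErr)
	}

	func main() {
		if err := probeHealthz("https://192.168.39.122:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	```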
	
	
	==> CRI-O <==
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.364105339Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549529364066365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f226a9eb-15df-48ab-828c-5ef92f8d19f4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.365859339Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a7f99ddd-b6ab-4c90-a53e-abb02b3cb114 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.365922972Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a7f99ddd-b6ab-4c90-a53e-abb02b3cb114 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.366138480Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a7f99ddd-b6ab-4c90-a53e-abb02b3cb114 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.442853101Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08c2f422-84b5-4712-8a4d-2a03ef53b83b name=/runtime.v1.RuntimeService/Version
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.443130188Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08c2f422-84b5-4712-8a4d-2a03ef53b83b name=/runtime.v1.RuntimeService/Version
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.445388175Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=283141db-4ae8-4e9a-a953-f3d302d22669 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.446083643Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549529446044574,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=283141db-4ae8-4e9a-a953-f3d302d22669 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.447196089Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=59c1cc6f-483c-4887-86c3-4de6f59c81ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.447254202Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=59c1cc6f-483c-4887-86c3-4de6f59c81ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.447465608Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=59c1cc6f-483c-4887-86c3-4de6f59c81ac name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.490845189Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ce745c22-8f20-409c-a075-dec257ac90e4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.490946939Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ce745c22-8f20-409c-a075-dec257ac90e4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.492103545Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=20730140-eeb5-4917-9e64-46c2f8ce3c09 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.493057949Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549529493031908,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=20730140-eeb5-4917-9e64-46c2f8ce3c09 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.493723255Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=af53e2d2-116f-47fe-a807-a2bc7b8a73d7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.493796310Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=af53e2d2-116f-47fe-a807-a2bc7b8a73d7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.493996237Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=af53e2d2-116f-47fe-a807-a2bc7b8a73d7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.533261129Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=064a35b3-bab2-4d10-9d70-35e06d6a70eb name=/runtime.v1.RuntimeService/Version
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.533380142Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=064a35b3-bab2-4d10-9d70-35e06d6a70eb name=/runtime.v1.RuntimeService/Version
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.534841721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d3b94a18-799c-4ed2-8bd4-2fa165ca9a24 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.535523254Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549529535276479,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d3b94a18-799c-4ed2-8bd4-2fa165ca9a24 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.536252059Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3fa40bcb-bc50-45bc-b048-be341ea88f24 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.536330184Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3fa40bcb-bc50-45bc-b048-be341ea88f24 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:38:49 default-k8s-diff-port-313436 crio[691]: time="2024-03-16 00:38:49.536678861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548258197504675,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-ddf096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 2,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28ae448dbfc1dc8f3d784d67ff4d05d4093c740b94ae849c500af8f0e73575b5,PodSandboxId:4ca8689dbd875c33fa1c2e29e2d20116f9916a674620f0298702918f3e7a2b44,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710548236828125150,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 988b8366-de69-435e-ac7d-c5d42dafc4b1,},Annotations:map[string]string{io.kubernetes.container.hash: 23fa5469,io.kubernetes.container.restartCount: 1,io.kubernetes.container.te
rminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26,PodSandboxId:e25bc5b6970759f972fa84cf84226bc3daaf505eab95b8a0e395c100387e2bcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710548235030503580,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-w9fx2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2d2fba6b-c237-4590-b025-bd92eda84778,},Annotations:map[string]string{io.kubernetes.container.hash: 1111ef08,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"n
ame\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6,PodSandboxId:109211d3e5b0510bb681355e628d0cb77a033c7f5d383196aefd04b7f89c6426,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710548227343107611,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-btmmm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7f49417-ca5
0-4c73-b3e7-378b5efffdfe,},Annotations:map[string]string{io.kubernetes.container.hash: 5900722c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335,PodSandboxId:85449f9cd07fb2488b4fe42aa3f5297468e1f0ecb793d9143c4877a9a6576ea3,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710548227357650993,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c272b778-0e60-4d40-826c-d
df096529b5b,},Annotations:map[string]string{io.kubernetes.container.hash: 3dd830,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb,PodSandboxId:ce63475bfdf792823bb28ac8bb62dbda846bbff439be434d2317ea6c17d40221,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710548222697207259,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9caa7adba5b19c43afffc58e7ba24099,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 9f559d56,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a,PodSandboxId:355142258a6470acff3356b30c093bfb0168d0e04e1c63b16719138973cdf1d3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710548222672396494,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e01f6967bba48199640d38efc550f6c,},Annotations:map[string
]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57,PodSandboxId:0d85c2214b0b6a38a98479eed2e4d1158b9439d8e2acc6db5b1b4e9ee2f29f39,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710548222676068921,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7b296b1fd35d419031cde1de328730b
,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012,PodSandboxId:88bf54752601cd3ddc3e21afad78907532dadca13bfae6d268b1afa51b675f43,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710548222585040588,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-313436,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6200347b170da85aa5ddf88e00074011,}
,Annotations:map[string]string{io.kubernetes.container.hash: c8f7fd8c,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3fa40bcb-bc50-45bc-b048-be341ea88f24 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	663378c6a7e6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   85449f9cd07fb       storage-provisioner
	28ae448dbfc1d       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   4ca8689dbd875       busybox
	9d8b76dc25828       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      21 minutes ago      Running             coredns                   1                   e25bc5b697075       coredns-5dd5756b68-w9fx2
	4ed399796d792       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   85449f9cd07fb       storage-provisioner
	81911669b0855       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      21 minutes ago      Running             kube-proxy                1                   109211d3e5b05       kube-proxy-btmmm
	472e7252cc27d       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      21 minutes ago      Running             etcd                      1                   ce63475bfdf79       etcd-default-k8s-diff-port-313436
	1d277e87ef306       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      21 minutes ago      Running             kube-controller-manager   1                   0d85c2214b0b6       kube-controller-manager-default-k8s-diff-port-313436
	06a79188858d0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      21 minutes ago      Running             kube-scheduler            1                   355142258a647       kube-scheduler-default-k8s-diff-port-313436
	1ea844db70263       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      21 minutes ago      Running             kube-apiserver            1                   88bf54752601c       kube-apiserver-default-k8s-diff-port-313436
	
	
	==> coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:55130 - 21089 "HINFO IN 5248382490005511924.3970797499207171790. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.020344801s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-313436
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-313436
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=default-k8s-diff-port-313436
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_09_30_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:09:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-313436
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:38:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:38:01 +0000   Sat, 16 Mar 2024 00:09:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:38:01 +0000   Sat, 16 Mar 2024 00:09:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:38:01 +0000   Sat, 16 Mar 2024 00:09:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:38:01 +0000   Sat, 16 Mar 2024 00:17:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.198
	  Hostname:    default-k8s-diff-port-313436
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 946b5a3986d64627993d563dfdbf7c19
	  System UUID:                946b5a39-86d6-4627-993d-563dfdbf7c19
	  Boot ID:                    14dacfab-6c8c-4adf-8510-4946d093b8f2
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-5dd5756b68-w9fx2                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-313436                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-313436             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-313436    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-btmmm                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-313436             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-cm878                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientPID
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-313436 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-313436 event: Registered Node default-k8s-diff-port-313436 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-313436 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-313436 event: Registered Node default-k8s-diff-port-313436 in Controller
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.053448] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.040276] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.665411] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.585502] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.646602] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.108325] systemd-fstab-generator[612]: Ignoring "noauto" option for root device
	[  +0.061948] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.067209] systemd-fstab-generator[624]: Ignoring "noauto" option for root device
	[  +0.219262] systemd-fstab-generator[638]: Ignoring "noauto" option for root device
	[  +0.160171] systemd-fstab-generator[650]: Ignoring "noauto" option for root device
	[  +0.269550] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +5.232409] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +0.076050] kauditd_printk_skb: 130 callbacks suppressed
	[Mar16 00:17] systemd-fstab-generator[898]: Ignoring "noauto" option for root device
	[  +5.607762] kauditd_printk_skb: 97 callbacks suppressed
	[  +2.459473] systemd-fstab-generator[1501]: Ignoring "noauto" option for root device
	[  +3.256275] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.798809] kauditd_printk_skb: 33 callbacks suppressed
	
	
	==> etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] <==
	{"level":"info","ts":"2024-03-16T00:17:22.229742Z","caller":"traceutil/trace.go:171","msg":"trace[1562147951] linearizableReadLoop","detail":"{readStateIndex:623; appliedIndex:622; }","duration":"333.529655ms","start":"2024-03-16T00:17:21.896193Z","end":"2024-03-16T00:17:22.229722Z","steps":["trace[1562147951] 'read index received'  (duration: 332.202486ms)","trace[1562147951] 'applied index is now lower than readState.Index'  (duration: 1.326409ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-16T00:17:22.229901Z","caller":"traceutil/trace.go:171","msg":"trace[251114440] transaction","detail":"{read_only:false; response_revision:591; number_of_response:1; }","duration":"369.524893ms","start":"2024-03-16T00:17:21.860364Z","end":"2024-03-16T00:17:22.229889Z","steps":["trace[251114440] 'process raft request'  (duration: 368.084543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:22.229924Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"333.727223ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" ","response":"range_response_count:1 size:5261"}
	{"level":"info","ts":"2024-03-16T00:17:22.230099Z","caller":"traceutil/trace.go:171","msg":"trace[2105319215] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-313436; range_end:; response_count:1; response_revision:591; }","duration":"333.915651ms","start":"2024-03-16T00:17:21.89617Z","end":"2024-03-16T00:17:22.230086Z","steps":["trace[2105319215] 'agreement among raft nodes before linearized reading'  (duration: 333.698748ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:17:22.230188Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:21.896156Z","time spent":"334.01955ms","remote":"127.0.0.1:55158","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5284,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" "}
	{"level":"warn","ts":"2024-03-16T00:17:22.230008Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-16T00:17:21.860348Z","time spent":"369.614921ms","remote":"127.0.0.1:55158","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5246,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" mod_revision:590 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" value_size:5178 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-313436\" > >"}
	{"level":"info","ts":"2024-03-16T00:27:04.896074Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":844}
	{"level":"info","ts":"2024-03-16T00:27:04.89829Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":844,"took":"1.560917ms","hash":3091162621}
	{"level":"info","ts":"2024-03-16T00:27:04.898364Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3091162621,"revision":844,"compact-revision":-1}
	{"level":"info","ts":"2024-03-16T00:32:04.904347Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1086}
	{"level":"info","ts":"2024-03-16T00:32:04.907145Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1086,"took":"1.716439ms","hash":3638989336}
	{"level":"info","ts":"2024-03-16T00:32:04.907248Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3638989336,"revision":1086,"compact-revision":844}
	{"level":"warn","ts":"2024-03-16T00:36:58.003177Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.750599ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14848242921832236497 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.198\" mod_revision:1558 > success:<request_put:<key:\"/registry/masterleases/192.168.72.198\" value_size:67 lease:5624870884977460687 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.198\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-16T00:36:58.003748Z","caller":"traceutil/trace.go:171","msg":"trace[81127177] linearizableReadLoop","detail":"{readStateIndex:1840; appliedIndex:1839; }","duration":"108.420976ms","start":"2024-03-16T00:36:57.895285Z","end":"2024-03-16T00:36:58.003706Z","steps":["trace[81127177] 'read index received'  (duration: 121.404µs)","trace[81127177] 'applied index is now lower than readState.Index'  (duration: 108.298026ms)"],"step_count":2}
	{"level":"info","ts":"2024-03-16T00:36:58.003871Z","caller":"traceutil/trace.go:171","msg":"trace[2021617001] transaction","detail":"{read_only:false; response_revision:1566; number_of_response:1; }","duration":"261.544545ms","start":"2024-03-16T00:36:57.742298Z","end":"2024-03-16T00:36:58.003842Z","steps":["trace[2021617001] 'process raft request'  (duration: 120.375595ms)","trace[2021617001] 'compare'  (duration: 139.621434ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-16T00:36:58.00401Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"108.819248ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:621"}
	{"level":"info","ts":"2024-03-16T00:36:58.005931Z","caller":"traceutil/trace.go:171","msg":"trace[364938942] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1566; }","duration":"109.388422ms","start":"2024-03-16T00:36:57.895173Z","end":"2024-03-16T00:36:58.004561Z","steps":["trace[364938942] 'agreement among raft nodes before linearized reading'  (duration: 108.719801ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:36:59.664547Z","caller":"traceutil/trace.go:171","msg":"trace[838278060] transaction","detail":"{read_only:false; response_revision:1568; number_of_response:1; }","duration":"180.389445ms","start":"2024-03-16T00:36:59.484144Z","end":"2024-03-16T00:36:59.664533Z","steps":["trace[838278060] 'process raft request'  (duration: 179.961975ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:37:04.916804Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1330}
	{"level":"info","ts":"2024-03-16T00:37:04.918175Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1330,"took":"1.070501ms","hash":3643064413}
	{"level":"info","ts":"2024-03-16T00:37:04.918341Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3643064413,"revision":1330,"compact-revision":1086}
	{"level":"info","ts":"2024-03-16T00:37:26.455375Z","caller":"traceutil/trace.go:171","msg":"trace[37125200] transaction","detail":"{read_only:false; response_revision:1591; number_of_response:1; }","duration":"181.866353ms","start":"2024-03-16T00:37:26.273477Z","end":"2024-03-16T00:37:26.455343Z","steps":["trace[37125200] 'process raft request'  (duration: 181.718143ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-16T00:38:06.891019Z","caller":"traceutil/trace.go:171","msg":"trace[926311271] transaction","detail":"{read_only:false; response_revision:1624; number_of_response:1; }","duration":"163.178531ms","start":"2024-03-16T00:38:06.727809Z","end":"2024-03-16T00:38:06.890988Z","steps":["trace[926311271] 'process raft request'  (duration: 162.669327ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-16T00:38:07.874968Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.113743ms","expected-duration":"100ms","prefix":"","request":"header:<ID:14848242921832236848 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.72.198\" mod_revision:1616 > success:<request_put:<key:\"/registry/masterleases/192.168.72.198\" value_size:67 lease:5624870884977461038 >> failure:<request_range:<key:\"/registry/masterleases/192.168.72.198\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-16T00:38:07.875062Z","caller":"traceutil/trace.go:171","msg":"trace[889148319] transaction","detail":"{read_only:false; response_revision:1625; number_of_response:1; }","duration":"180.127397ms","start":"2024-03-16T00:38:07.694922Z","end":"2024-03-16T00:38:07.87505Z","steps":["trace[889148319] 'process raft request'  (duration: 55.805032ms)","trace[889148319] 'compare'  (duration: 123.982536ms)"],"step_count":2}
	
	
	==> kernel <==
	 00:38:49 up 22 min,  0 users,  load average: 0.17, 0.14, 0.10
	Linux default-k8s-diff-port-313436 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] <==
	W0316 00:35:07.362864       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:35:07.362988       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:35:07.363019       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:36:06.296765       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0316 00:37:06.297180       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:37:06.364681       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:37:06.364847       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:37:06.365473       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:37:07.366111       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:37:07.366278       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:37:07.366358       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:37:07.366147       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:37:07.366637       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:37:07.368488       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0316 00:38:06.296876       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0316 00:38:07.366641       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:38:07.366793       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:38:07.366830       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:38:07.368867       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:38:07.368992       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:38:07.369050       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] <==
	I0316 00:33:37.987203       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.671µs"
	E0316 00:33:49.026806       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:33:49.584752       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:34:19.032155       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:34:19.595823       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:34:49.038169       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:34:49.604268       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:35:19.043180       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:35:19.615506       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:35:49.048247       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:35:49.624486       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:36:19.054192       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:36:19.633347       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:36:49.061055       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:36:49.641008       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:37:19.067383       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:37:19.654718       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:37:49.073289       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:37:49.674460       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:38:19.080890       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:38:19.682660       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:38:27.994220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="268.887µs"
	I0316 00:38:38.985749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="229.707µs"
	E0316 00:38:49.087273       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:38:49.691403       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] <==
	I0316 00:17:07.649911       1 server_others.go:69] "Using iptables proxy"
	I0316 00:17:07.660306       1 node.go:141] Successfully retrieved node IP: 192.168.72.198
	I0316 00:17:07.700454       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0316 00:17:07.700494       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:17:07.703213       1 server_others.go:152] "Using iptables Proxier"
	I0316 00:17:07.703275       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:17:07.703468       1 server.go:846] "Version info" version="v1.28.4"
	I0316 00:17:07.703496       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:17:07.704339       1 config.go:188] "Starting service config controller"
	I0316 00:17:07.704391       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:17:07.704412       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:17:07.704416       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:17:07.704923       1 config.go:315] "Starting node config controller"
	I0316 00:17:07.704955       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:17:07.805545       1 shared_informer.go:318] Caches are synced for node config
	I0316 00:17:07.805647       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:17:07.805742       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] <==
	I0316 00:17:03.986934       1 serving.go:348] Generated self-signed cert in-memory
	W0316 00:17:06.381764       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0316 00:17:06.383674       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:17:06.383791       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0316 00:17:06.383820       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0316 00:17:06.399284       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0316 00:17:06.399390       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:17:06.401127       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0316 00:17:06.401220       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0316 00:17:06.405807       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0316 00:17:06.401232       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0316 00:17:06.508522       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:36:22 default-k8s-diff-port-313436 kubelet[905]: E0316 00:36:22.969782     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:36:35 default-k8s-diff-port-313436 kubelet[905]: E0316 00:36:35.971394     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:36:48 default-k8s-diff-port-313436 kubelet[905]: E0316 00:36:48.970471     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:37:01 default-k8s-diff-port-313436 kubelet[905]: E0316 00:37:01.971798     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:37:02 default-k8s-diff-port-313436 kubelet[905]: E0316 00:37:02.000821     905 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:37:02 default-k8s-diff-port-313436 kubelet[905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:37:02 default-k8s-diff-port-313436 kubelet[905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:37:02 default-k8s-diff-port-313436 kubelet[905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:37:02 default-k8s-diff-port-313436 kubelet[905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:37:13 default-k8s-diff-port-313436 kubelet[905]: E0316 00:37:13.970427     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:37:25 default-k8s-diff-port-313436 kubelet[905]: E0316 00:37:25.969738     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:37:36 default-k8s-diff-port-313436 kubelet[905]: E0316 00:37:36.970257     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:37:51 default-k8s-diff-port-313436 kubelet[905]: E0316 00:37:51.971957     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:38:02 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:02.000280     905 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:38:02 default-k8s-diff-port-313436 kubelet[905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:38:02 default-k8s-diff-port-313436 kubelet[905]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:38:02 default-k8s-diff-port-313436 kubelet[905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:38:02 default-k8s-diff-port-313436 kubelet[905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:38:03 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:03.971225     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:38:17 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:17.008470     905 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 16 00:38:17 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:17.008657     905 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 16 00:38:17 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:17.009495     905 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-k9pt8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-cm878_kube-system(d239b608-f098-4a69-9863-7f7134523952): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 16 00:38:17 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:17.009674     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:38:27 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:27.970681     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	Mar 16 00:38:38 default-k8s-diff-port-313436 kubelet[905]: E0316 00:38:38.969519     905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cm878" podUID="d239b608-f098-4a69-9863-7f7134523952"
	
	
	==> storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] <==
	I0316 00:17:07.530247       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0316 00:17:37.533960       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] <==
	I0316 00:17:38.318292       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:17:38.326976       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:17:38.327078       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:17:38.337969       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:17:38.338425       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313436_765ba5e8-5a3e-47ea-bb2a-0184565770b1!
	I0316 00:17:38.340918       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2d337c78-eae8-4f4c-898f-77886111425a", APIVersion:"v1", ResourceVersion:"608", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-313436_765ba5e8-5a3e-47ea-bb2a-0184565770b1 became leader
	I0316 00:17:38.439665       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-313436_765ba5e8-5a3e-47ea-bb2a-0184565770b1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-cm878
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 describe pod metrics-server-57f55c9bc5-cm878
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-313436 describe pod metrics-server-57f55c9bc5-cm878: exit status 1 (95.093657ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cm878" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-313436 describe pod metrics-server-57f55c9bc5-cm878: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (491.48s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (245.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-238598 -n no-preload-238598
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-16 00:36:23.465530219 +0000 UTC m=+6013.888381491
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-238598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-238598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.574µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-238598 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-238598 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-238598 logs -n 25: (1.344800438s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:36 UTC | 16 Mar 24 00:36 UTC |
	| start   | -p newest-cni-143629 --memory=2200 --alsologtostderr   | newest-cni-143629            | jenkins | v1.32.0 | 16 Mar 24 00:36 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
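	The table above lists the minikube invocations captured during this run. As a hedged reproduction sketch (not part of the original report output): the no-preload start recorded in the table can be replayed against a local libvirt/KVM host. The profile name and flags are copied verbatim from that row and the binary path matches the one used elsewhere in this report; everything about the local environment (KVM host, built binary) is an assumption.

	  # Assumes a local KVM/libvirt host and a locally built minikube binary at this path.
	  out/minikube-linux-amd64 start -p no-preload-238598 \
	    --memory=2200 --alsologtostderr --wait=true --preload=false \
	    --driver=kvm2 --container-runtime=crio \
	    --kubernetes-version=v1.29.0-rc.2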
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:36:23
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:36:23.669671  129288 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:36:23.669824  129288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:36:23.669838  129288 out.go:304] Setting ErrFile to fd 2...
	I0316 00:36:23.669845  129288 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:36:23.670043  129288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:36:23.670753  129288 out.go:298] Setting JSON to false
	I0316 00:36:23.671931  129288 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":11934,"bootTime":1710537450,"procs":212,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:36:23.671997  129288 start.go:139] virtualization: kvm guest
	I0316 00:36:23.674451  129288 out.go:177] * [newest-cni-143629] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:36:23.676132  129288 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:36:23.676238  129288 notify.go:220] Checking for updates...
	I0316 00:36:23.677672  129288 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:36:23.679054  129288 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:36:23.680368  129288 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:36:23.681645  129288 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:36:23.683078  129288 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:36:23.685107  129288 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:36:23.685287  129288 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:36:23.685440  129288 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:36:23.685572  129288 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:36:23.726953  129288 out.go:177] * Using the kvm2 driver based on user configuration
	I0316 00:36:23.728281  129288 start.go:297] selected driver: kvm2
	I0316 00:36:23.728307  129288 start.go:901] validating driver "kvm2" against <nil>
	I0316 00:36:23.728327  129288 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:36:23.729054  129288 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:36:23.729146  129288 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:36:23.746542  129288 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:36:23.746607  129288 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0316 00:36:23.746654  129288 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0316 00:36:23.746985  129288 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0316 00:36:23.747057  129288 cni.go:84] Creating CNI manager for ""
	I0316 00:36:23.747076  129288 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:36:23.747092  129288 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0316 00:36:23.747160  129288 start.go:340] cluster config:
	{Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:36:23.747293  129288 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:36:23.749111  129288 out.go:177] * Starting "newest-cni-143629" primary control-plane node in "newest-cni-143629" cluster
	I0316 00:36:23.750544  129288 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:36:23.750597  129288 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0316 00:36:23.750653  129288 cache.go:56] Caching tarball of preloaded images
	I0316 00:36:23.750780  129288 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:36:23.750794  129288 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0316 00:36:23.750947  129288 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/config.json ...
	I0316 00:36:23.750974  129288 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/config.json: {Name:mk7d17dcbfa3d9db6ee5938dab28e82127cf1d08 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:36:23.751224  129288 start.go:360] acquireMachinesLock for newest-cni-143629: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:36:23.751276  129288 start.go:364] duration metric: took 29.848µs to acquireMachinesLock for "newest-cni-143629"
	I0316 00:36:23.751301  129288 start.go:93] Provisioning new machine with config: &{Name:newest-cni-143629 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-143629 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:36:23.751429  129288 start.go:125] createHost starting for "" (driver="kvm2")
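	The start log above ends at createHost for "newest-cni-143629", just after the profile config was written. A minimal sketch, assuming shell access to the CI workspace (which this report does not guarantee), for inspecting that generated cluster config; the path is taken from the "Saving config to ..." line above:

	  # Path copied from the profile.go log line; availability of the file is an assumption.
	  cat /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/newest-cni-143629/config.json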
	
	
	==> CRI-O <==
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.172262482Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549384172220972,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d2b10f11-d719-44c3-a39f-61ba1a507633 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.172766256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d90c169c-8df9-4216-b5be-2b34e812719a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.172840082Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d90c169c-8df9-4216-b5be-2b34e812719a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.173206854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d90c169c-8df9-4216-b5be-2b34e812719a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.212310265Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ba0fc207-6358-47f9-b675-e64665ac4445 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.212400780Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ba0fc207-6358-47f9-b675-e64665ac4445 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.221699441Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6edecb78-f2f7-4f3f-9fba-012848eff54a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.222037687Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549384222018484,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6edecb78-f2f7-4f3f-9fba-012848eff54a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.223230544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=486e8884-8e38-4f83-8184-a690f256b684 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.223314014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=486e8884-8e38-4f83-8184-a690f256b684 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.223778901Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=486e8884-8e38-4f83-8184-a690f256b684 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.270898669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dc7307a8-f253-48b5-9e9d-9858382b9bb4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.270971326Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dc7307a8-f253-48b5-9e9d-9858382b9bb4 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.272206427Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a9ab21a0-2561-43d0-9dce-1c115c627b35 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.272642732Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549384272616575,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a9ab21a0-2561-43d0-9dce-1c115c627b35 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.273416186Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e9d3684a-5b03-47e4-8c08-7b88f3162e92 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.273527886Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e9d3684a-5b03-47e4-8c08-7b88f3162e92 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.274280522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e9d3684a-5b03-47e4-8c08-7b88f3162e92 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.314495327Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7f31a689-fa95-4386-ad1f-de45b9fb0628 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.314815979Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7f31a689-fa95-4386-ad1f-de45b9fb0628 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.315977927Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=599899d9-49c8-4c66-911d-7c216ef66b6b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.316547995Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549384316488246,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=599899d9-49c8-4c66-911d-7c216ef66b6b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.317431621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8af95820-65b1-4411-84db-e4289b1205ed name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.317487221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8af95820-65b1-4411-84db-e4289b1205ed name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:24 no-preload-238598 crio[695]: time="2024-03-16 00:36:24.317705926Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a,PodSandboxId:301a783629bcc6c723f4eaff21f188447e30af7f7ecd34e43a69106bcc6c3dd4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710548594802358845,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 60914654-d240-4165-b045-5b411d99e2e0,},Annotations:map[string]string{io.kubernetes.container.hash: 3fe01d2a,io.kubernetes.container.restartCount: 0,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d,PodSandboxId:a21b9eed0e13258953898b6e4f5d77984162d17621d7ce7df4046f1bce5f23c4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594186034953,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-wg5c8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a7347306-ab8d-42d0-935c-98f98192e6b7,},Annotations:map[string]string{io.kubernetes.container.hash: dd068cb3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP
\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64,PodSandboxId:9bdf745b2ba2ea7371ba9ed566a7e2da64d8696eb42a07d417e7d73263c3a8ea,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710548594052584516,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-5drh8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e8
6d8e6a-f832-4364-ac68-c69e40b92523,},Annotations:map[string]string{io.kubernetes.container.hash: 449925d6,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c,PodSandboxId:49b058addd673d7b1462d58fc894052cadef4008d0317813494ea9c64c1050c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:
1710548593584945125,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h6p8x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 738ca90e-7f8a-4449-8e5b-df714ee8320a,},Annotations:map[string]string{io.kubernetes.container.hash: abd578e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df,PodSandboxId:98f30e5ddf8e24c0d6c435ca5243a5229bba4ba51b8a2cf55097e42ffd399590,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710548574092156724,L
abels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2f841aae5b305433b44fb61546ba3c06,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4,PodSandboxId:0792bffad2469018a101a2d8ec96a0097d8af7d9767114307811d1855cb6fd15,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710548574093522611,Labels
:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe40ef889eafc3500f2f54a30348e295,},Annotations:map[string]string{io.kubernetes.container.hash: b4106e31,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc,PodSandboxId:16957d5bf895dd892c6d4a5e2a5e4d1ef6daab129f4dfb328751b1c304692b38,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710548574029033576,Labels:map[string]string{io.kubernetes.
container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a52dc9965dac3768fd9feb58806b292,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6,PodSandboxId:f06443b6f5e5c6797fee5e5d9b46142ceb2ab1645b099f54deb366a24bbce59e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710548574027194934,Labels:map[string]string{io.kubernetes.container.nam
e: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-238598,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 2ad073ffe7a3e400ba4e3a87cafbed54,},Annotations:map[string]string{io.kubernetes.container.hash: a7453714,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8af95820-65b1-4411-84db-e4289b1205ed name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	785ab7a84aef9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   301a783629bcc       storage-provisioner
	384d72cd0e231       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   a21b9eed0e132       coredns-76f75df574-wg5c8
	f77f69c426101       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   13 minutes ago      Running             coredns                   0                   9bdf745b2ba2e       coredns-76f75df574-5drh8
	5d72a7cc21406       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   13 minutes ago      Running             kube-proxy                0                   49b058addd673       kube-proxy-h6p8x
	88a3af391e8b6       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   13 minutes ago      Running             etcd                      2                   0792bffad2469       etcd-no-preload-238598
	b603efc4e9e65       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   13 minutes ago      Running             kube-controller-manager   2                   98f30e5ddf8e2       kube-controller-manager-no-preload-238598
	4ff55775eeb84       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   13 minutes ago      Running             kube-scheduler            2                   16957d5bf895d       kube-scheduler-no-preload-238598
	11395c3995c48       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   13 minutes ago      Running             kube-apiserver            2                   f06443b6f5e5c       kube-apiserver-no-preload-238598
	
	
	==> coredns [384d72cd0e23180d735521914099537f4ba10a43f449f3cdd85db3a2bcb3f72d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [f77f69c4261010703cc88cc2e9e24013f6006f7533bee9034b5e7ab58bf07f64] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> describe nodes <==
	Name:               no-preload-238598
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-238598
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e
	                    minikube.k8s.io/name=no-preload-238598
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 16 Mar 2024 00:22:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-238598
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 16 Mar 2024 00:36:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 16 Mar 2024 00:33:32 +0000   Sat, 16 Mar 2024 00:22:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 16 Mar 2024 00:33:32 +0000   Sat, 16 Mar 2024 00:22:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 16 Mar 2024 00:33:32 +0000   Sat, 16 Mar 2024 00:22:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 16 Mar 2024 00:33:32 +0000   Sat, 16 Mar 2024 00:23:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.137
	  Hostname:    no-preload-238598
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 8403e337d3114d4f95c9de93d0441895
	  System UUID:                8403e337-d311-4d4f-95c9-de93d0441895
	  Boot ID:                    80bd4afb-43a5-4e2c-b6c7-cd172769a008
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-5drh8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-76f75df574-wg5c8                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-no-preload-238598                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-238598             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-238598    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-h6p8x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-no-preload-238598             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-j5k5h              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node no-preload-238598 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node no-preload-238598 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node no-preload-238598 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeNotReady             13m   kubelet          Node no-preload-238598 status is now: NodeNotReady
	  Normal  NodeReady                13m   kubelet          Node no-preload-238598 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node no-preload-238598 event: Registered Node no-preload-238598 in Controller
	
	
	==> dmesg <==
	[  +0.056167] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.044615] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.003320] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.845161] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.768343] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.241338] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.057078] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.065533] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.200030] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.109040] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.276038] systemd-fstab-generator[680]: Ignoring "noauto" option for root device
	[ +17.291274] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.063342] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.424231] systemd-fstab-generator[1317]: Ignoring "noauto" option for root device
	[Mar16 00:18] kauditd_printk_skb: 94 callbacks suppressed
	[  +5.419587] kauditd_printk_skb: 69 callbacks suppressed
	[Mar16 00:22] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.580029] systemd-fstab-generator[3834]: Ignoring "noauto" option for root device
	[  +4.608804] kauditd_printk_skb: 53 callbacks suppressed
	[  +2.700655] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	[Mar16 00:23] systemd-fstab-generator[4344]: Ignoring "noauto" option for root device
	[  +0.088538] kauditd_printk_skb: 14 callbacks suppressed
	[Mar16 00:24] kauditd_printk_skb: 78 callbacks suppressed
	
	
	==> etcd [88a3af391e8b61058d24daf26d01fa5d127eda7b7ee1b821c2556a913a21fff4] <==
	{"level":"info","ts":"2024-03-16T00:22:54.501813Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"53f1e4b6b2bc3c92","initial-advertise-peer-urls":["https://192.168.50.137:2380"],"listen-peer-urls":["https://192.168.50.137:2380"],"advertise-client-urls":["https://192.168.50.137:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.137:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-16T00:22:54.50457Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-16T00:22:54.500194Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-16T00:22:54.504689Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.137:2380"}
	{"level":"info","ts":"2024-03-16T00:22:55.303503Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-16T00:22:55.30357Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-16T00:22:55.3036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgPreVoteResp from 53f1e4b6b2bc3c92 at term 1"}
	{"level":"info","ts":"2024-03-16T00:22:55.303613Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became candidate at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.303618Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 received MsgVoteResp from 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.303626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"53f1e4b6b2bc3c92 became leader at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.303634Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 53f1e4b6b2bc3c92 elected leader 53f1e4b6b2bc3c92 at term 2"}
	{"level":"info","ts":"2024-03-16T00:22:55.308355Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"53f1e4b6b2bc3c92","local-member-attributes":"{Name:no-preload-238598 ClientURLs:[https://192.168.50.137:2379]}","request-path":"/0/members/53f1e4b6b2bc3c92/attributes","cluster-id":"7ac1a4431768b343","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-16T00:22:55.308417Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:22:55.308484Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-16T00:22:55.319804Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.137:2379"}
	{"level":"info","ts":"2024-03-16T00:22:55.320178Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.321513Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-16T00:22:55.321564Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-16T00:22:55.324208Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7ac1a4431768b343","local-member-id":"53f1e4b6b2bc3c92","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.324308Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.32436Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-16T00:22:55.325775Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-16T00:32:55.389192Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":678}
	{"level":"info","ts":"2024-03-16T00:32:55.392772Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":678,"took":"2.552027ms","hash":1990720988}
	{"level":"info","ts":"2024-03-16T00:32:55.392891Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1990720988,"revision":678,"compact-revision":-1}
	
	
	==> kernel <==
	 00:36:24 up 19 min,  0 users,  load average: 0.24, 0.19, 0.16
	Linux no-preload-238598 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [11395c3995c48d2c7e8978faeaa48016acd29f0f34733204edaa665e2298d7a6] <==
	I0316 00:30:57.935980       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:32:56.940412       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:32:56.940702       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0316 00:32:57.941260       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:32:57.941327       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:32:57.941393       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:32:57.941275       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:32:57.941481       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:32:57.942624       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:33:57.942068       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:33:57.942355       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:33:57.942384       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:33:57.943280       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:33:57.943386       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:33:57.943415       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:35:57.942724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:35:57.943002       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0316 00:35:57.943032       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0316 00:35:57.944257       1 handler_proxy.go:93] no RequestInfo found in the context
	E0316 00:35:57.944432       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0316 00:35:57.944462       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [b603efc4e9e656b4c921271f215ea68d19dc1548591641a6b855594b48c4f4df] <==
	I0316 00:30:42.683174       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:31:12.189075       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:31:12.691461       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:31:42.196025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:31:42.700682       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:32:12.202641       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:32:12.710881       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:32:42.209015       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:32:42.719447       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:33:12.215062       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:33:12.729026       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:33:42.220323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:33:42.737622       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:34:08.695439       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="257.272µs"
	E0316 00:34:12.226647       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:34:12.748907       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0316 00:34:22.698197       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="710.351µs"
	E0316 00:34:42.232432       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:34:42.763621       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:35:12.238709       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:35:12.771965       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:35:42.245921       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:35:42.779520       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0316 00:36:12.254244       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0316 00:36:12.787021       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5d72a7cc214069940ab31ade4f9a3392a2f61b80f31efc33f93d02379b86c03c] <==
	I0316 00:23:13.893839       1 server_others.go:72] "Using iptables proxy"
	I0316 00:23:14.022855       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.137"]
	I0316 00:23:14.462251       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0316 00:23:14.462309       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0316 00:23:14.462328       1 server_others.go:168] "Using iptables Proxier"
	I0316 00:23:14.478738       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0316 00:23:14.478993       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0316 00:23:14.479030       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0316 00:23:14.488714       1 config.go:188] "Starting service config controller"
	I0316 00:23:14.488770       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0316 00:23:14.488789       1 config.go:97] "Starting endpoint slice config controller"
	I0316 00:23:14.488793       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0316 00:23:14.501434       1 config.go:315] "Starting node config controller"
	I0316 00:23:14.501481       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0316 00:23:14.591215       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0316 00:23:14.591274       1 shared_informer.go:318] Caches are synced for service config
	I0316 00:23:14.601664       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [4ff55775eeb84d72aa695297bbbc178e53673c097cde65fdecc743731c4211cc] <==
	W0316 00:22:56.969783       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0316 00:22:56.969790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0316 00:22:56.971466       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0316 00:22:56.971505       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0316 00:22:57.774650       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0316 00:22:57.774703       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0316 00:22:57.777042       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0316 00:22:57.777089       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0316 00:22:57.783440       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:57.783487       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:57.812324       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0316 00:22:57.812352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0316 00:22:57.906439       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:57.906489       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:57.932445       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0316 00:22:57.932553       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0316 00:22:58.059189       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:58.059313       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:58.074720       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0316 00:22:58.074769       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0316 00:22:58.154664       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0316 00:22:58.154721       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0316 00:22:58.237807       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0316 00:22:58.237873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0316 00:23:00.949011       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 16 00:34:00 no-preload-238598 kubelet[4166]: E0316 00:34:00.712945    4166 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:34:00 no-preload-238598 kubelet[4166]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:34:00 no-preload-238598 kubelet[4166]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:34:00 no-preload-238598 kubelet[4166]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:34:00 no-preload-238598 kubelet[4166]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:34:08 no-preload-238598 kubelet[4166]: E0316 00:34:08.676269    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:34:22 no-preload-238598 kubelet[4166]: E0316 00:34:22.675715    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:34:37 no-preload-238598 kubelet[4166]: E0316 00:34:37.674932    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:34:51 no-preload-238598 kubelet[4166]: E0316 00:34:51.674198    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:35:00 no-preload-238598 kubelet[4166]: E0316 00:35:00.711601    4166 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:35:00 no-preload-238598 kubelet[4166]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:35:00 no-preload-238598 kubelet[4166]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:35:00 no-preload-238598 kubelet[4166]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:35:00 no-preload-238598 kubelet[4166]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:35:04 no-preload-238598 kubelet[4166]: E0316 00:35:04.675654    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:35:19 no-preload-238598 kubelet[4166]: E0316 00:35:19.675360    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:35:33 no-preload-238598 kubelet[4166]: E0316 00:35:33.674649    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:35:47 no-preload-238598 kubelet[4166]: E0316 00:35:47.675730    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:36:00 no-preload-238598 kubelet[4166]: E0316 00:36:00.712074    4166 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 16 00:36:00 no-preload-238598 kubelet[4166]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 16 00:36:00 no-preload-238598 kubelet[4166]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 16 00:36:00 no-preload-238598 kubelet[4166]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 16 00:36:00 no-preload-238598 kubelet[4166]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 16 00:36:02 no-preload-238598 kubelet[4166]: E0316 00:36:02.674796    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	Mar 16 00:36:17 no-preload-238598 kubelet[4166]: E0316 00:36:17.675166    4166 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-j5k5h" podUID="cbdf6082-83fb-4af6-95e9-90545e64c898"
	
	
	==> storage-provisioner [785ab7a84aef93921889ad055cd3a0de23088be49381f32365668a6fb06d1c3a] <==
	I0316 00:23:14.976879       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0316 00:23:14.986757       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0316 00:23:14.987058       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0316 00:23:14.995242       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0316 00:23:14.995704       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-238598_9d73a6bf-16e1-40ec-b905-c21f9b3c4d26!
	I0316 00:23:15.001432       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef3c01a2-febc-4977-aec7-0a7a64617505", APIVersion:"v1", ResourceVersion:"423", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-238598_9d73a6bf-16e1-40ec-b905-c21f9b3c4d26 became leader
	I0316 00:23:15.096744       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-238598_9d73a6bf-16e1-40ec-b905-c21f9b3c4d26!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-238598 -n no-preload-238598
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-238598 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-j5k5h
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-238598 describe pod metrics-server-57f55c9bc5-j5k5h
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-238598 describe pod metrics-server-57f55c9bc5-j5k5h: exit status 1 (72.704976ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-j5k5h" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-238598 describe pod metrics-server-57f55c9bc5-j5k5h: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (245.40s)
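For anyone triaging this by hand, the wait that times out above is essentially a pod poll followed by a kubectl describe of whatever the poll found. Below is a minimal sketch of that loop, assuming kubectl is on PATH and the profile's kubeconfig context still exists; the namespace, label selector and jsonpath are the ones the helpers print in these logs, while the 30-second timeout and 2-second interval are illustrative stand-ins for the test's real settings:

	// addonpoll.go: rough re-creation of the AddonExistsAfterStop pod poll.
	// Sketch only -- the profile/context name is taken from the log above; the
	// timeout and interval are assumptions, not the values used by helpers_test.go.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	func main() {
		const (
			kubecontext = "no-preload-238598"
			namespace   = "kubernetes-dashboard"
			selector    = "k8s-app=kubernetes-dashboard"
			timeout     = 30 * time.Second // illustrative; the test waits far longer
		)
		deadline := time.Now().Add(timeout)
		for {
			// Same query the helper issues: list pod names matching the selector.
			out, err := exec.Command("kubectl", "--context", kubecontext,
				"get", "pods", "-n", namespace, "-l", selector,
				"-o", "jsonpath={.items[*].metadata.name}").CombinedOutput()
			if names := strings.Fields(string(out)); err == nil && len(names) > 0 {
				fmt.Println("found pods:", names)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("no matching pods before timeout; last kubectl output:", string(out))
				return
			}
			time.Sleep(2 * time.Second)
		}
	}

The post-mortem step at helpers_test.go:277 is then just kubectl describe pod on the names that poll returned, which is why it exits 1 with NotFound once the metrics-server pod no longer exists.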

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (109.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.39.107:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.39.107:8443: connect: connection refused
[the identical warning was emitted on every subsequent poll of the apiserver until the 9m0s wait below reached its deadline]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (244.92256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-402923" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-402923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-402923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.283µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-402923 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
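For reference, the checks above can be reproduced by hand once the profile's apiserver is reachable again; a minimal sketch reusing the context, namespace, and label selector that appear in this log (output omitted, illustrative only):

	# list the dashboard pods the helper was polling for
	kubectl --context old-k8s-version-402923 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# describe the scraper deployment whose image the test inspects
	kubectl --context old-k8s-version-402923 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper

The describe command is the one the test itself runs at start_stop_delete_test.go:291; the image check then looks for "registry.k8s.io/echoserver:1.4" in that deployment's description.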
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (229.567131ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
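The split state here (Host "Running" while the earlier call reported APIServer "Stopped") can be seen in one query by combining both fields in a single Go template, the same mechanism the two status calls in this log use; a sketch, with the field names taken from those calls and the combined format string being an assumption:

	out/minikube-linux-amd64 status -p old-k8s-version-402923 --format='host:{{.Host}} apiserver:{{.APIServer}}'
	# for this profile the output would read roughly: host:Running apiserver:Stopped

The helper itself treats the resulting non-zero exit as potentially benign, hence the "(may be ok)" note above.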
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-402923 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-402923 logs -n 25: (1.548875133s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| ssh     | cert-options-313368 ssh                                | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-313368 -- sudo                         | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-313368                                 | cert-options-313368          | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:06 UTC |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:06 UTC | 16 Mar 24 00:09 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-982877                              | cert-expiration-982877       | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	| delete  | -p                                                     | disable-driver-mounts-183652 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:08 UTC |
	|         | disable-driver-mounts-183652                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:08 UTC | 16 Mar 24 00:09 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-238598             | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-666637            | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-313436  | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC | 16 Mar 24 00:09 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:09 UTC |                     |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-402923        | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-238598                  | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-666637                 | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-238598                                   | no-preload-238598            | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:23 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-666637                                  | embed-certs-666637           | jenkins | v1.32.0 | 16 Mar 24 00:11 UTC | 16 Mar 24 00:21 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-313436       | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-313436 | jenkins | v1.32.0 | 16 Mar 24 00:12 UTC | 16 Mar 24 00:21 UTC |
	|         | default-k8s-diff-port-313436                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-402923             | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC | 16 Mar 24 00:13 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-402923                              | old-k8s-version-402923       | jenkins | v1.32.0 | 16 Mar 24 00:13 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/16 00:13:05
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0316 00:13:05.158815  124077 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:13:05.159121  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159133  124077 out.go:304] Setting ErrFile to fd 2...
	I0316 00:13:05.159144  124077 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:13:05.159353  124077 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:13:05.159899  124077 out.go:298] Setting JSON to false
	I0316 00:13:05.160799  124077 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":10535,"bootTime":1710537450,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:13:05.160863  124077 start.go:139] virtualization: kvm guest
	I0316 00:13:05.163240  124077 out.go:177] * [old-k8s-version-402923] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:13:05.164761  124077 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:13:05.164791  124077 notify.go:220] Checking for updates...
	I0316 00:13:05.166326  124077 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:13:05.167585  124077 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:13:05.168973  124077 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:13:05.170153  124077 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:13:05.171266  124077 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:13:05.172816  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:13:05.173249  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.173289  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.188538  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39657
	I0316 00:13:05.188917  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.189453  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.189479  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.189829  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.190019  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.191868  124077 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0316 00:13:05.193083  124077 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:13:05.193404  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:13:05.193443  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:13:05.207840  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43807
	I0316 00:13:05.208223  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:13:05.208683  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:13:05.208711  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:13:05.209041  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:13:05.209224  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:13:05.243299  124077 out.go:177] * Using the kvm2 driver based on existing profile
	I0316 00:13:05.244618  124077 start.go:297] selected driver: kvm2
	I0316 00:13:05.244640  124077 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.244792  124077 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:13:05.245450  124077 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.245509  124077 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0316 00:13:05.260046  124077 install.go:137] /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2 version is 1.32.0
	I0316 00:13:05.260437  124077 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:13:05.260510  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:13:05.260524  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:13:05.260561  124077 start.go:340] cluster config:
	{Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:13:05.260734  124077 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0316 00:13:05.263633  124077 out.go:177] * Starting "old-k8s-version-402923" primary control-plane node in "old-k8s-version-402923" cluster
	I0316 00:13:00.891560  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:05.265113  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:13:05.265154  124077 preload.go:147] Found local preload: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0316 00:13:05.265170  124077 cache.go:56] Caching tarball of preloaded images
	I0316 00:13:05.265244  124077 preload.go:173] Found /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0316 00:13:05.265254  124077 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0316 00:13:05.265353  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:13:05.265534  124077 start.go:360] acquireMachinesLock for old-k8s-version-402923: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:13:06.971548  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:10.043616  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:16.123615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:19.195641  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:25.275569  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:28.347627  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:34.427628  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:37.499621  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:43.579636  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:46.651611  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:52.731602  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:13:55.803555  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:01.883545  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:04.955579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:11.035610  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:14.107615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:20.187606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:23.259572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:29.339575  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:32.411617  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:38.491587  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:41.563659  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:47.643582  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:50.715565  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:56.795596  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:14:59.867614  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:05.947572  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:09.019585  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:15.099606  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:18.171563  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:24.251589  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:27.323592  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:33.403599  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:36.475652  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:42.555600  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:45.627577  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:51.707630  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:15:54.779625  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:00.859579  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:03.931626  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:10.011762  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
	I0316 00:16:13.083615  123454 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.50.137:22: connect: no route to host
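	The long run of "no route to host" entries above is libmachine repeatedly probing the stopped no-preload VM's SSH port until it either answers or the provisioning deadline expires. A minimal Go sketch of that kind of probe follows; the address, retry interval, and timeout are illustrative assumptions, not minikube's actual values.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForSSH dials the guest's SSH port until it accepts a connection or the
	// overall timeout elapses, logging each failed attempt along the way.
	func waitForSSH(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
			if err == nil {
				conn.Close()
				return nil // port 22 is reachable; provisioning can continue
			}
			fmt.Printf("Error dialing TCP: %v\n", err)
			time.Sleep(3 * time.Second) // pause between probes (assumed interval)
		}
		return fmt.Errorf("timed out waiting for %s", addr)
	}

	func main() {
		if err := waitForSSH("192.168.50.137:22", 30*time.Second); err != nil {
			fmt.Println(err)
		}
	}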
	I0316 00:16:16.087122  123537 start.go:364] duration metric: took 4m28.254030119s to acquireMachinesLock for "embed-certs-666637"
	I0316 00:16:16.087211  123537 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:16.087224  123537 fix.go:54] fixHost starting: 
	I0316 00:16:16.087613  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:16.087653  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:16.102371  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37447
	I0316 00:16:16.102813  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:16.103305  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:16.103343  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:16.103693  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:16.103874  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:16.104010  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:16.105752  123537 fix.go:112] recreateIfNeeded on embed-certs-666637: state=Stopped err=<nil>
	I0316 00:16:16.105780  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	W0316 00:16:16.105959  123537 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:16.107881  123537 out.go:177] * Restarting existing kvm2 VM for "embed-certs-666637" ...
	I0316 00:16:16.109056  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Start
	I0316 00:16:16.109231  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring networks are active...
	I0316 00:16:16.110036  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network default is active
	I0316 00:16:16.110372  123537 main.go:141] libmachine: (embed-certs-666637) Ensuring network mk-embed-certs-666637 is active
	I0316 00:16:16.110782  123537 main.go:141] libmachine: (embed-certs-666637) Getting domain xml...
	I0316 00:16:16.111608  123537 main.go:141] libmachine: (embed-certs-666637) Creating domain...
	I0316 00:16:17.296901  123537 main.go:141] libmachine: (embed-certs-666637) Waiting to get IP...
	I0316 00:16:17.297746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.298129  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.298317  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.298111  124543 retry.go:31] will retry after 269.98852ms: waiting for machine to come up
	I0316 00:16:17.569866  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.570322  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.570349  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.570278  124543 retry.go:31] will retry after 244.711835ms: waiting for machine to come up
	I0316 00:16:16.084301  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:16.084359  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084699  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:16:16.084726  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:16:16.084970  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:16:16.086868  123454 machine.go:97] duration metric: took 4m35.39093995s to provisionDockerMachine
	I0316 00:16:16.087007  123454 fix.go:56] duration metric: took 4m35.413006758s for fixHost
	I0316 00:16:16.087038  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 4m35.413320023s
	W0316 00:16:16.087068  123454 start.go:713] error starting host: provision: host is not running
	W0316 00:16:16.087236  123454 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0316 00:16:16.087249  123454 start.go:728] Will try again in 5 seconds ...
	I0316 00:16:17.816747  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:17.817165  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:17.817196  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:17.817109  124543 retry.go:31] will retry after 326.155242ms: waiting for machine to come up
	I0316 00:16:18.144611  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.145047  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.145081  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.145000  124543 retry.go:31] will retry after 464.805158ms: waiting for machine to come up
	I0316 00:16:18.611746  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:18.612105  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:18.612140  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:18.612039  124543 retry.go:31] will retry after 593.718495ms: waiting for machine to come up
	I0316 00:16:19.208024  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.208444  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.208476  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.208379  124543 retry.go:31] will retry after 772.07702ms: waiting for machine to come up
	I0316 00:16:19.982326  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:19.982800  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:19.982827  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:19.982706  124543 retry.go:31] will retry after 846.887476ms: waiting for machine to come up
	I0316 00:16:20.830726  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:20.831144  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:20.831168  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:20.831098  124543 retry.go:31] will retry after 1.274824907s: waiting for machine to come up
	I0316 00:16:22.107855  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:22.108252  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:22.108278  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:22.108209  124543 retry.go:31] will retry after 1.41217789s: waiting for machine to come up
	I0316 00:16:21.088013  123454 start.go:360] acquireMachinesLock for no-preload-238598: {Name:mk0262afd87806fa1c563f43ca618d569f9ce09a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0316 00:16:23.522725  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:23.523143  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:23.523179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:23.523094  124543 retry.go:31] will retry after 1.567285216s: waiting for machine to come up
	I0316 00:16:25.092539  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:25.092954  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:25.092981  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:25.092941  124543 retry.go:31] will retry after 2.260428679s: waiting for machine to come up
	I0316 00:16:27.354650  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:27.355051  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:27.355082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:27.354990  124543 retry.go:31] will retry after 2.402464465s: waiting for machine to come up
	I0316 00:16:29.758774  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:29.759220  123537 main.go:141] libmachine: (embed-certs-666637) DBG | unable to find current IP address of domain embed-certs-666637 in network mk-embed-certs-666637
	I0316 00:16:29.759253  123537 main.go:141] libmachine: (embed-certs-666637) DBG | I0316 00:16:29.759176  124543 retry.go:31] will retry after 3.63505234s: waiting for machine to come up
	I0316 00:16:34.648552  123819 start.go:364] duration metric: took 4m4.062008179s to acquireMachinesLock for "default-k8s-diff-port-313436"
	I0316 00:16:34.648628  123819 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:34.648638  123819 fix.go:54] fixHost starting: 
	I0316 00:16:34.649089  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:34.649134  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:34.667801  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46281
	I0316 00:16:34.668234  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:34.668737  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:16:34.668768  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:34.669123  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:34.669349  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:34.669552  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:16:34.671100  123819 fix.go:112] recreateIfNeeded on default-k8s-diff-port-313436: state=Stopped err=<nil>
	I0316 00:16:34.671139  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	W0316 00:16:34.671297  123819 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:34.673738  123819 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-313436" ...
	I0316 00:16:34.675120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Start
	I0316 00:16:34.675292  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring networks are active...
	I0316 00:16:34.676038  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network default is active
	I0316 00:16:34.676427  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Ensuring network mk-default-k8s-diff-port-313436 is active
	I0316 00:16:34.676855  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Getting domain xml...
	I0316 00:16:34.677501  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Creating domain...
	I0316 00:16:33.397686  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398274  123537 main.go:141] libmachine: (embed-certs-666637) Found IP for machine: 192.168.61.91
	I0316 00:16:33.398301  123537 main.go:141] libmachine: (embed-certs-666637) Reserving static IP address...
	I0316 00:16:33.398319  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has current primary IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.398829  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.398859  123537 main.go:141] libmachine: (embed-certs-666637) DBG | skip adding static IP to network mk-embed-certs-666637 - found existing host DHCP lease matching {name: "embed-certs-666637", mac: "52:54:00:14:3c:c6", ip: "192.168.61.91"}
	I0316 00:16:33.398883  123537 main.go:141] libmachine: (embed-certs-666637) Reserved static IP address: 192.168.61.91
	I0316 00:16:33.398896  123537 main.go:141] libmachine: (embed-certs-666637) Waiting for SSH to be available...
	I0316 00:16:33.398905  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Getting to WaitForSSH function...
	I0316 00:16:33.401376  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.401835  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.401872  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.402054  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH client type: external
	I0316 00:16:33.402082  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa (-rw-------)
	I0316 00:16:33.402113  123537 main.go:141] libmachine: (embed-certs-666637) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.91 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:33.402141  123537 main.go:141] libmachine: (embed-certs-666637) DBG | About to run SSH command:
	I0316 00:16:33.402188  123537 main.go:141] libmachine: (embed-certs-666637) DBG | exit 0
	I0316 00:16:33.523353  123537 main.go:141] libmachine: (embed-certs-666637) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:33.523747  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetConfigRaw
	I0316 00:16:33.524393  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.526639  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527046  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.527080  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.527278  123537 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/config.json ...
	I0316 00:16:33.527509  123537 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:33.527527  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:33.527766  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.529906  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530179  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.530210  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.530341  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.530596  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530816  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.530953  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.531119  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.531334  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.531348  123537 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:33.635573  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:33.635601  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.635879  123537 buildroot.go:166] provisioning hostname "embed-certs-666637"
	I0316 00:16:33.635905  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.636109  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.638998  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639369  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.639417  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.639629  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.639795  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.639971  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.640103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.640366  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.640524  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.640543  123537 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-666637 && echo "embed-certs-666637" | sudo tee /etc/hostname
	I0316 00:16:33.757019  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-666637
	
	I0316 00:16:33.757049  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.759808  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760120  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.760154  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.760375  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.760583  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760723  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.760829  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.760951  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:33.761121  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:33.761144  123537 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-666637' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-666637/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-666637' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:33.873548  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:16:33.873587  123537 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:33.873642  123537 buildroot.go:174] setting up certificates
	I0316 00:16:33.873654  123537 provision.go:84] configureAuth start
	I0316 00:16:33.873666  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetMachineName
	I0316 00:16:33.873986  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:33.876609  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.876976  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.877004  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.877194  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.879624  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880156  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.880185  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.880300  123537 provision.go:143] copyHostCerts
	I0316 00:16:33.880359  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:33.880370  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:33.880441  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:33.880526  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:33.880534  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:33.880558  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:33.880625  123537 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:33.880632  123537 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:33.880653  123537 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:33.880707  123537 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.embed-certs-666637 san=[127.0.0.1 192.168.61.91 embed-certs-666637 localhost minikube]
	I0316 00:16:33.984403  123537 provision.go:177] copyRemoteCerts
	I0316 00:16:33.984471  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:33.984499  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:33.987297  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987711  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:33.987741  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:33.987894  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:33.988108  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:33.988284  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:33.988456  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.069540  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:34.094494  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0316 00:16:34.119198  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:34.144669  123537 provision.go:87] duration metric: took 271.000471ms to configureAuth
	I0316 00:16:34.144701  123537 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:34.144891  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:34.144989  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.148055  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148464  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.148496  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.148710  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.148918  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149097  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.149251  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.149416  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.149580  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.149596  123537 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:34.414026  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:34.414058  123537 machine.go:97] duration metric: took 886.536134ms to provisionDockerMachine
	I0316 00:16:34.414070  123537 start.go:293] postStartSetup for "embed-certs-666637" (driver="kvm2")
	I0316 00:16:34.414081  123537 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:34.414101  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.414464  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:34.414497  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.417211  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417482  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.417520  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.417617  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.417804  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.417990  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.418126  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.498223  123537 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:34.502954  123537 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:34.502989  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:34.503068  123537 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:34.503156  123537 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:34.503258  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:34.513065  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:34.537606  123537 start.go:296] duration metric: took 123.521431ms for postStartSetup
	I0316 00:16:34.537657  123537 fix.go:56] duration metric: took 18.450434099s for fixHost
	I0316 00:16:34.537679  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.540574  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.540908  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.540950  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.541086  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.541302  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541471  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.541609  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.541803  123537 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:34.542009  123537 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.91 22 <nil> <nil>}
	I0316 00:16:34.542025  123537 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:34.648381  123537 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548194.613058580
	
	I0316 00:16:34.648419  123537 fix.go:216] guest clock: 1710548194.613058580
	I0316 00:16:34.648427  123537 fix.go:229] Guest: 2024-03-16 00:16:34.61305858 +0000 UTC Remote: 2024-03-16 00:16:34.537661993 +0000 UTC m=+286.854063579 (delta=75.396587ms)
	I0316 00:16:34.648454  123537 fix.go:200] guest clock delta is within tolerance: 75.396587ms
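	The guest-clock lines above compare the VM's reported time with the host's and accept the roughly 75ms difference as within tolerance, so no resync is needed. A minimal sketch of such a check follows; the one-second tolerance is an assumption for illustration, not minikube's actual threshold.

	package main

	import (
		"fmt"
		"time"
	)

	// clockDeltaWithinTolerance returns the absolute guest/host clock difference
	// and whether it falls inside the allowed tolerance.
	func clockDeltaWithinTolerance(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance
	}

	func main() {
		host := time.Now()
		guest := host.Add(75 * time.Millisecond) // roughly the delta reported in the log
		delta, ok := clockDeltaWithinTolerance(guest, host, time.Second)
		fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, ok)
	}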
	I0316 00:16:34.648459  123537 start.go:83] releasing machines lock for "embed-certs-666637", held for 18.561300744s
	I0316 00:16:34.648483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.648770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:34.651350  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651748  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.651794  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.651926  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652573  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652810  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:34.652907  123537 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:34.652965  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.653064  123537 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:34.653090  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:34.655796  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656121  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656149  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656170  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656281  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656461  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.656562  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:34.656586  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:34.656640  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.656739  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:34.656807  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.656883  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:34.657023  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:34.657249  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:34.759596  123537 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:34.765571  123537 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:34.915897  123537 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:34.923372  123537 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:34.923471  123537 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:34.940579  123537 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:34.940613  123537 start.go:494] detecting cgroup driver to use...
	I0316 00:16:34.940699  123537 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:34.957640  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:34.971525  123537 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:34.971598  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:34.987985  123537 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:35.001952  123537 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:35.124357  123537 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:35.273948  123537 docker.go:233] disabling docker service ...
	I0316 00:16:35.274037  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:35.291073  123537 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:35.311209  123537 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:35.460630  123537 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:35.581263  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:16:35.596460  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:35.617992  123537 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:35.618042  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.628372  123537 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:35.628426  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.639487  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.650397  123537 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:35.662065  123537 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:35.676003  123537 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:35.686159  123537 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:35.686241  123537 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:35.699814  123537 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
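	The failed sysctl above is expected on a fresh boot: the bridge-netfilter key only exists once br_netfilter is loaded, so the tooling loads the module and enables IPv4 forwarding before restarting CRI-O. An illustrative Go sketch of that fallback follows; it mirrors the commands in the log but is not minikube's actual code path.

	package main

	import (
		"fmt"
		"os/exec"
	)

	// ensureBridgeNetfilter checks the bridge-nf sysctl key and, if it is missing,
	// loads the kernel module that provides it, then turns on IPv4 forwarding.
	func ensureBridgeNetfilter() error {
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				return fmt.Errorf("modprobe br_netfilter: %w", err)
			}
		}
		return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
	}

	func main() {
		fmt.Println(ensureBridgeNetfilter())
	}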
	I0316 00:16:35.710182  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:35.831831  123537 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:16:35.977556  123537 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:35.977638  123537 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:35.982729  123537 start.go:562] Will wait 60s for crictl version
	I0316 00:16:35.982806  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:16:35.986695  123537 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:36.023299  123537 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:36.023412  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.055441  123537 ssh_runner.go:195] Run: crio --version
	I0316 00:16:36.090313  123537 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:36.091622  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetIP
	I0316 00:16:36.094687  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095062  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:36.095098  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:36.095277  123537 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:36.099781  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:36.113522  123537 kubeadm.go:877] updating cluster {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:36.113674  123537 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:36.113743  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:36.152208  123537 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:36.152300  123537 ssh_runner.go:195] Run: which lz4
	I0316 00:16:36.156802  123537 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:16:36.161430  123537 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:36.161472  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:35.911510  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting to get IP...
	I0316 00:16:35.912562  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.912986  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:35.913064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:35.912955  124655 retry.go:31] will retry after 248.147893ms: waiting for machine to come up
	I0316 00:16:36.162476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163094  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.163127  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.163032  124655 retry.go:31] will retry after 387.219214ms: waiting for machine to come up
	I0316 00:16:36.551678  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552203  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.552236  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.552178  124655 retry.go:31] will retry after 391.385671ms: waiting for machine to come up
	I0316 00:16:36.945741  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:36.946275  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:36.946216  124655 retry.go:31] will retry after 470.449619ms: waiting for machine to come up
	I0316 00:16:37.417836  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418324  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.418353  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.418259  124655 retry.go:31] will retry after 508.962644ms: waiting for machine to come up
	I0316 00:16:37.929194  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929710  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:37.929743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:37.929671  124655 retry.go:31] will retry after 877.538639ms: waiting for machine to come up
	I0316 00:16:38.808551  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809061  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:38.809100  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:38.809002  124655 retry.go:31] will retry after 754.319242ms: waiting for machine to come up
	I0316 00:16:39.565060  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565475  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:39.565512  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:39.565411  124655 retry.go:31] will retry after 1.472475348s: waiting for machine to come up
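The repeated "will retry after Xms: waiting for machine to come up" lines above come from a retry loop with growing, jittered delays while libvirt hands out the machine's DHCP lease. A minimal sketch of that pattern is below (the backoff constants and the waitForIP stand-in are assumptions for illustration, not minikube's actual retry.go):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs op until it succeeds or attempts run out, sleeping a jittered,
    // growing delay between tries, similar in spirit to the log lines above.
    func retry(op func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = op(); err == nil {
                return nil
            }
            d := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        i := 0
        waitForIP := func() error { // stand-in for "ask libvirt for the domain's DHCP lease"
            if i++; i < 4 {
                return errors.New("waiting for machine to come up")
            }
            return nil
        }
        _ = retry(waitForIP, 10, 250*time.Millisecond)
    }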
	I0316 00:16:37.946470  123537 crio.go:444] duration metric: took 1.789700065s to copy over tarball
	I0316 00:16:37.946552  123537 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:40.497841  123537 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.551257887s)
	I0316 00:16:40.497867  123537 crio.go:451] duration metric: took 2.551367803s to extract the tarball
	I0316 00:16:40.497875  123537 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:40.539695  123537 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:40.588945  123537 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:40.588974  123537 cache_images.go:84] Images are preloaded, skipping loading
	I0316 00:16:40.588983  123537 kubeadm.go:928] updating node { 192.168.61.91 8443 v1.28.4 crio true true} ...
	I0316 00:16:40.589125  123537 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-666637 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.91
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
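The kubelet unit override above ([Service] ExecStart with --hostname-override and --node-ip) is rendered from the cluster config and later copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged sketch of rendering such a drop-in with text/template (the template text and field names here are illustrative, not minikube's exact template):

    package main

    import (
        "os"
        "text/template"
    )

    const dropIn = `[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("kubelet").Parse(dropIn))
        // Values taken from the log above; in practice the result is scp'd to the node.
        _ = t.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.28.4", "embed-certs-666637", "192.168.61.91"})
    }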
	I0316 00:16:40.589216  123537 ssh_runner.go:195] Run: crio config
	I0316 00:16:40.641673  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:40.641702  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:40.641719  123537 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:40.641754  123537 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.91 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-666637 NodeName:embed-certs-666637 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.91"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.91 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:40.641939  123537 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.91
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-666637"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.91
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.91"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
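One way to sanity-check the KubeletConfiguration/KubeProxyConfiguration documents rendered above is to round-trip them through a YAML decoder before shipping the file to the node. A minimal sketch using gopkg.in/yaml.v3 (the struct mirrors only a couple of the fields shown and is not minikube's own validation):

    package main

    import (
        "fmt"

        "gopkg.in/yaml.v3"
    )

    const kubeletCfg = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs
    failSwapOn: false
    evictionHard:
      nodefs.available: "0%"
      nodefs.inodesFree: "0%"
      imagefs.available: "0%"
    `

    func main() {
        var cfg struct {
            Kind         string            `yaml:"kind"`
            CgroupDriver string            `yaml:"cgroupDriver"`
            FailSwapOn   bool              `yaml:"failSwapOn"`
            EvictionHard map[string]string `yaml:"evictionHard"`
        }
        if err := yaml.Unmarshal([]byte(kubeletCfg), &cfg); err != nil {
            panic(err)
        }
        fmt.Printf("%s: cgroupDriver=%s evictionHard=%v\n", cfg.Kind, cfg.CgroupDriver, cfg.EvictionHard)
    }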
	
	I0316 00:16:40.642024  123537 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:40.652461  123537 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:40.652539  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:40.662114  123537 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0316 00:16:40.679782  123537 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:40.701982  123537 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I0316 00:16:40.720088  123537 ssh_runner.go:195] Run: grep 192.168.61.91	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:40.724199  123537 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.91	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:40.737133  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:40.860343  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:40.878437  123537 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637 for IP: 192.168.61.91
	I0316 00:16:40.878466  123537 certs.go:194] generating shared ca certs ...
	I0316 00:16:40.878489  123537 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:40.878690  123537 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:40.878766  123537 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:40.878779  123537 certs.go:256] generating profile certs ...
	I0316 00:16:40.878888  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/client.key
	I0316 00:16:40.878990  123537 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key.07955952
	I0316 00:16:40.879059  123537 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key
	I0316 00:16:40.879178  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:40.879225  123537 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:40.879239  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:40.879271  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:40.879302  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:40.879352  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:40.879409  123537 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:40.880141  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:40.924047  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:40.962441  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:41.000283  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:41.034353  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0316 00:16:41.069315  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:16:41.100325  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:16:41.129285  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/embed-certs-666637/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:16:41.155899  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:16:41.180657  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:16:41.205961  123537 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:16:41.231886  123537 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:16:41.249785  123537 ssh_runner.go:195] Run: openssl version
	I0316 00:16:41.255703  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:16:41.266968  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271536  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.271595  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:16:41.277460  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:16:41.288854  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:16:41.300302  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305189  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.305256  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:16:41.311200  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:16:41.322784  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:16:41.334879  123537 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339774  123537 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.339837  123537 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:16:41.345746  123537 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:16:41.357661  123537 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:16:41.362469  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:16:41.368875  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:16:41.375759  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:16:41.382518  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:16:41.388629  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:16:41.394882  123537 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
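Each openssl x509 -noout -checkend 86400 run above verifies that a certificate will still be valid 24 hours from now. The same check expressed in Go, as a hedged sketch (the PEM path is one of the files named in the log; minikube runs openssl over SSH rather than parsing the file locally):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires before now+d, mirroring `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return cert.NotAfter.Before(time.Now().Add(d)), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }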
	I0316 00:16:41.401114  123537 kubeadm.go:391] StartCluster: {Name:embed-certs-666637 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-666637 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:16:41.401243  123537 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:16:41.401304  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.449499  123537 cri.go:89] found id: ""
	I0316 00:16:41.449590  123537 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:16:41.461139  123537 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:16:41.461165  123537 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:16:41.461173  123537 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:16:41.461243  123537 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:16:41.473648  123537 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:16:41.474652  123537 kubeconfig.go:125] found "embed-certs-666637" server: "https://192.168.61.91:8443"
	I0316 00:16:41.476724  123537 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:16:41.488387  123537 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.91
	I0316 00:16:41.488426  123537 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:16:41.488439  123537 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:16:41.488485  123537 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:16:41.526197  123537 cri.go:89] found id: ""
	I0316 00:16:41.526283  123537 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:16:41.545489  123537 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:16:41.555977  123537 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:16:41.555998  123537 kubeadm.go:156] found existing configuration files:
	
	I0316 00:16:41.556048  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:16:41.565806  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:16:41.565891  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:16:41.575646  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:16:41.585269  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:16:41.585329  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:16:41.595336  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.605081  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:16:41.605144  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:16:41.615182  123537 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:16:41.624781  123537 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:16:41.624837  123537 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:16:41.634852  123537 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:16:41.644749  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:41.748782  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.477775  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.688730  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
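The restart path above replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, and then etcd) instead of a full kubeadm init. A minimal sketch of driving those phases with os/exec (run locally here only for illustration; minikube executes them over SSH with sudo and the pinned binaries PATH shown in the log):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
        for _, p := range phases {
            cmd := "sudo env PATH=/var/lib/minikube/binaries/v1.28.4:$PATH kubeadm init phase " + p +
                " --config /var/tmp/minikube/kubeadm.yaml"
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            fmt.Printf("--- %s ---\n%s", p, out)
            if err != nil {
                fmt.Fprintln(os.Stderr, "phase failed:", err)
                return
            }
        }
    }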
	I0316 00:16:41.039441  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039924  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:41.039965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:41.039885  124655 retry.go:31] will retry after 1.408692905s: waiting for machine to come up
	I0316 00:16:42.449971  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450402  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:42.450434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:42.450355  124655 retry.go:31] will retry after 1.539639877s: waiting for machine to come up
	I0316 00:16:43.992314  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992833  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:43.992869  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:43.992777  124655 retry.go:31] will retry after 2.297369864s: waiting for machine to come up
	I0316 00:16:42.777223  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:42.944089  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:16:42.944193  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.445082  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.945117  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:16:43.963812  123537 api_server.go:72] duration metric: took 1.019723734s to wait for apiserver process to appear ...
	I0316 00:16:43.963845  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:16:43.963871  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.924208  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.924258  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.924278  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.953212  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.953245  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:46.964449  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:46.988201  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:16:46.988232  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:16:47.464502  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.469385  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.469421  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:47.964483  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:47.970448  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:16:47.970492  123537 api_server.go:103] status: https://192.168.61.91:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:16:48.463984  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:16:48.468908  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:16:48.476120  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:16:48.476153  123537 api_server.go:131] duration metric: took 4.512298176s to wait for apiserver health ...
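The readiness loop above keeps GETting https://192.168.61.91:8443/healthz, treating the 403 and 500 responses as "not ready yet" and stopping once it receives 200 ok. A hedged sketch of that poll (this version skips TLS verification purely for brevity; the real client trusts the cluster CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // brevity only
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.61.91:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Printf("healthz returned 200: %s\n", body)
                    return
                }
                fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for apiserver health")
    }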
	I0316 00:16:48.476164  123537 cni.go:84] Creating CNI manager for ""
	I0316 00:16:48.476172  123537 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:48.478076  123537 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:16:48.479565  123537 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:16:48.490129  123537 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:16:48.516263  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:16:48.532732  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:16:48.532768  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:16:48.532778  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:16:48.532788  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:16:48.532795  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:16:48.532801  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:16:48.532808  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:16:48.532815  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:16:48.532822  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:16:48.532833  123537 system_pods.go:74] duration metric: took 16.547677ms to wait for pod list to return data ...
	I0316 00:16:48.532845  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:16:48.535945  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:16:48.535989  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:16:48.536006  123537 node_conditions.go:105] duration metric: took 3.154184ms to run NodePressure ...
	I0316 00:16:48.536027  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:16:48.733537  123537 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739166  123537 kubeadm.go:733] kubelet initialised
	I0316 00:16:48.739196  123537 kubeadm.go:734] duration metric: took 5.63118ms waiting for restarted kubelet to initialise ...
	I0316 00:16:48.739209  123537 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:48.744724  123537 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.750261  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750299  123537 pod_ready.go:81] duration metric: took 5.547917ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.750310  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.750323  123537 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.755340  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755362  123537 pod_ready.go:81] duration metric: took 5.029639ms for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.755371  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "etcd-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.755379  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.761104  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761128  123537 pod_ready.go:81] duration metric: took 5.740133ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.761138  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.761146  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:48.921215  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921244  123537 pod_ready.go:81] duration metric: took 160.08501ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:48.921254  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:48.921260  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.319922  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319954  123537 pod_ready.go:81] duration metric: took 398.685799ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.319963  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-proxy-8fpc5" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.319969  123537 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:49.720866  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720922  123537 pod_ready.go:81] duration metric: took 400.944023ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:49.720948  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:49.720967  123537 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:50.120836  123537 pod_ready.go:97] node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120865  123537 pod_ready.go:81] duration metric: took 399.883676ms for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:16:50.120875  123537 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-666637" hosting pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:50.120882  123537 pod_ready.go:38] duration metric: took 1.381661602s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
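The pod_ready loop above polls each system-critical pod and skips it while the hosting node itself is not Ready. A minimal client-go sketch of waiting for a single pod's Ready condition (the kubeconfig path, namespace, and pod name are copied from the log for illustration; this is not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17991-75602/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-t8xb4", metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("pod is Ready")
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }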
	I0316 00:16:50.120923  123537 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:16:50.133619  123537 ops.go:34] apiserver oom_adj: -16
	I0316 00:16:50.133653  123537 kubeadm.go:591] duration metric: took 8.672472438s to restartPrimaryControlPlane
	I0316 00:16:50.133663  123537 kubeadm.go:393] duration metric: took 8.732557685s to StartCluster
	I0316 00:16:50.133684  123537 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.133760  123537 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:16:50.135355  123537 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:50.135613  123537 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.91 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:16:50.140637  123537 out.go:177] * Verifying Kubernetes components...
	I0316 00:16:50.135727  123537 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:16:50.135843  123537 config.go:182] Loaded profile config "embed-certs-666637": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:50.142015  123537 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:50.142027  123537 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-666637"
	I0316 00:16:50.142050  123537 addons.go:69] Setting default-storageclass=true in profile "embed-certs-666637"
	I0316 00:16:50.142070  123537 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-666637"
	W0316 00:16:50.142079  123537 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:16:50.142090  123537 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-666637"
	I0316 00:16:50.142092  123537 addons.go:69] Setting metrics-server=true in profile "embed-certs-666637"
	I0316 00:16:50.142121  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142124  123537 addons.go:234] Setting addon metrics-server=true in "embed-certs-666637"
	W0316 00:16:50.142136  123537 addons.go:243] addon metrics-server should already be in state true
	I0316 00:16:50.142168  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.142439  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142468  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142503  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.142558  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.142577  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.156773  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39333
	I0316 00:16:50.156804  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34721
	I0316 00:16:50.157267  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157268  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.157591  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0316 00:16:50.157835  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157841  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.157857  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157858  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.157925  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.158223  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158226  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.158404  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.158419  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.158731  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158753  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158795  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.158828  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.158932  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.159126  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.162347  123537 addons.go:234] Setting addon default-storageclass=true in "embed-certs-666637"
	W0316 00:16:50.162365  123537 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:16:50.162392  123537 host.go:66] Checking if "embed-certs-666637" exists ...
	I0316 00:16:50.162612  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.162649  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.172299  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33135
	I0316 00:16:50.172676  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.173173  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.173193  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.173547  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.173770  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.175668  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.177676  123537 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:16:50.175968  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36553
	I0316 00:16:50.176110  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40081
	I0316 00:16:50.179172  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:16:50.179189  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:16:50.179206  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.179453  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179538  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.179888  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.179909  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180021  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.180037  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.180266  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180385  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.180613  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.180788  123537 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:50.180811  123537 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:50.185060  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.192504  123537 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:16:46.292804  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293326  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:46.293363  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:46.293267  124655 retry.go:31] will retry after 2.301997121s: waiting for machine to come up
	I0316 00:16:48.596337  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596777  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | unable to find current IP address of domain default-k8s-diff-port-313436 in network mk-default-k8s-diff-port-313436
	I0316 00:16:48.596805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | I0316 00:16:48.596731  124655 retry.go:31] will retry after 3.159447069s: waiting for machine to come up
	I0316 00:16:50.186146  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.186717  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.193945  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.193971  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.194051  123537 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.194079  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:16:50.194100  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.194103  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.194264  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.194420  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.196511  123537 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37021
	I0316 00:16:50.197160  123537 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:50.197580  123537 main.go:141] libmachine: Using API Version  1
	I0316 00:16:50.197598  123537 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:50.197658  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198007  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.198039  123537 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:50.198038  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.198235  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetState
	I0316 00:16:50.198237  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.198435  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.198612  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.198772  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.200270  123537 main.go:141] libmachine: (embed-certs-666637) Calling .DriverName
	I0316 00:16:50.200540  123537 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.200554  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:16:50.200566  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHHostname
	I0316 00:16:50.203147  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203634  123537 main.go:141] libmachine: (embed-certs-666637) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:14:3c:c6", ip: ""} in network mk-embed-certs-666637: {Iface:virbr3 ExpiryTime:2024-03-16 01:16:27 +0000 UTC Type:0 Mac:52:54:00:14:3c:c6 Iaid: IPaddr:192.168.61.91 Prefix:24 Hostname:embed-certs-666637 Clientid:01:52:54:00:14:3c:c6}
	I0316 00:16:50.203655  123537 main.go:141] libmachine: (embed-certs-666637) DBG | domain embed-certs-666637 has defined IP address 192.168.61.91 and MAC address 52:54:00:14:3c:c6 in network mk-embed-certs-666637
	I0316 00:16:50.203765  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHPort
	I0316 00:16:50.203966  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHKeyPath
	I0316 00:16:50.204201  123537 main.go:141] libmachine: (embed-certs-666637) Calling .GetSSHUsername
	I0316 00:16:50.204335  123537 sshutil.go:53] new ssh client: &{IP:192.168.61.91 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/embed-certs-666637/id_rsa Username:docker}
	I0316 00:16:50.317046  123537 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:50.340203  123537 node_ready.go:35] waiting up to 6m0s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:50.415453  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:16:50.423732  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:16:50.424648  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:16:50.424663  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:16:50.470134  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:16:50.470164  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:16:50.518806  123537 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:50.518833  123537 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:16:50.570454  123537 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:16:51.627153  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.203388401s)
	I0316 00:16:51.627211  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627222  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627419  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.211925303s)
	I0316 00:16:51.627468  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627483  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627533  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627595  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627609  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627620  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627549  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627859  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.627885  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.627895  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.627914  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.627956  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.627976  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.629345  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.629320  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.633811  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.633831  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.634043  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.634081  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726400  123537 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.15588774s)
	I0316 00:16:51.726458  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726472  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.726820  123537 main.go:141] libmachine: (embed-certs-666637) DBG | Closing plugin on server side
	I0316 00:16:51.726853  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.726875  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.726889  123537 main.go:141] libmachine: Making call to close driver server
	I0316 00:16:51.726898  123537 main.go:141] libmachine: (embed-certs-666637) Calling .Close
	I0316 00:16:51.727178  123537 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:16:51.727193  123537 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:16:51.727206  123537 addons.go:470] Verifying addon metrics-server=true in "embed-certs-666637"
	I0316 00:16:51.729277  123537 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0316 00:16:51.730645  123537 addons.go:505] duration metric: took 1.594919212s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0316 00:16:52.344107  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
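
	[note] The sequence above is the addon flow for the embed-certs-666637 profile: each addon manifest is scp'd into /etc/kubernetes/addons on the guest and then applied with the kubeconfig and kubectl binary that live inside the VM. A minimal sketch of the equivalent guest-side commands, with paths taken from the log (illustrative only; the real flow goes through ssh_runner, not an interactive shell):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml \
	      -f /etc/kubernetes/addons/storageclass.yaml
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.28.4/kubectl apply \
	      -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	      -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	      -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	      -f /etc/kubernetes/addons/metrics-server-service.yaml
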
	I0316 00:16:53.260401  124077 start.go:364] duration metric: took 3m47.994815506s to acquireMachinesLock for "old-k8s-version-402923"
	I0316 00:16:53.260473  124077 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:16:53.260480  124077 fix.go:54] fixHost starting: 
	I0316 00:16:53.260822  124077 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:16:53.260863  124077 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:16:53.276786  124077 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37373
	I0316 00:16:53.277183  124077 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:16:53.277711  124077 main.go:141] libmachine: Using API Version  1
	I0316 00:16:53.277745  124077 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:16:53.278155  124077 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:16:53.278619  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:16:53.278811  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetState
	I0316 00:16:53.280276  124077 fix.go:112] recreateIfNeeded on old-k8s-version-402923: state=Stopped err=<nil>
	I0316 00:16:53.280314  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	W0316 00:16:53.280527  124077 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:16:53.282576  124077 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-402923" ...
	I0316 00:16:51.757133  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757570  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Found IP for machine: 192.168.72.198
	I0316 00:16:51.757603  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has current primary IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.757616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserving static IP address...
	I0316 00:16:51.758067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.758093  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | skip adding static IP to network mk-default-k8s-diff-port-313436 - found existing host DHCP lease matching {name: "default-k8s-diff-port-313436", mac: "52:54:00:cc:b2:59", ip: "192.168.72.198"}
	I0316 00:16:51.758110  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Reserved static IP address: 192.168.72.198
	I0316 00:16:51.758120  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Getting to WaitForSSH function...
	I0316 00:16:51.758138  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Waiting for SSH to be available...
	I0316 00:16:51.760276  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760596  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.760632  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.760711  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH client type: external
	I0316 00:16:51.760744  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa (-rw-------)
	I0316 00:16:51.760797  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.198 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:16:51.760820  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | About to run SSH command:
	I0316 00:16:51.760861  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | exit 0
	I0316 00:16:51.887432  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | SSH cmd err, output: <nil>: 
	I0316 00:16:51.887829  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetConfigRaw
	I0316 00:16:51.888471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:51.891514  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.891923  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.891949  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.892232  123819 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/config.json ...
	I0316 00:16:51.892502  123819 machine.go:94] provisionDockerMachine start ...
	I0316 00:16:51.892527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:51.892782  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:51.895025  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:51.895367  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:51.895483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:51.895683  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895841  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:51.895969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:51.896178  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:51.896361  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:51.896372  123819 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:16:52.012107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:16:52.012154  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012405  123819 buildroot.go:166] provisioning hostname "default-k8s-diff-port-313436"
	I0316 00:16:52.012434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.012640  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.015307  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.015823  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.015847  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.016055  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.016266  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016433  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.016565  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.016758  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.016976  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.016992  123819 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-313436 && echo "default-k8s-diff-port-313436" | sudo tee /etc/hostname
	I0316 00:16:52.149152  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-313436
	
	I0316 00:16:52.149180  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.152472  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.152852  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.152896  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.153056  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.153239  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153412  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.153616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.153837  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.154077  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.154108  123819 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-313436' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-313436/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-313436' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:16:52.285258  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
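
	[note] The SSH script above is the hostname-provisioning step: the new hostname is written with tee, then the 127.0.1.1 entry in /etc/hosts is rewritten (or appended) so the name resolves locally. Assuming the stock Buildroot guest image, which typically also carries a 127.0.0.1 localhost entry, /etc/hosts ends up containing roughly:

	    127.0.0.1   localhost
	    127.0.1.1   default-k8s-diff-port-313436
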
	I0316 00:16:52.285290  123819 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:16:52.285313  123819 buildroot.go:174] setting up certificates
	I0316 00:16:52.285323  123819 provision.go:84] configureAuth start
	I0316 00:16:52.285331  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetMachineName
	I0316 00:16:52.285631  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:52.288214  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288494  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.288527  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.288699  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.290965  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291354  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.291380  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.291571  123819 provision.go:143] copyHostCerts
	I0316 00:16:52.291644  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:16:52.291658  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:16:52.291719  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:16:52.291827  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:16:52.291839  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:16:52.291868  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:16:52.291966  123819 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:16:52.291978  123819 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:16:52.292005  123819 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:16:52.292095  123819 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-313436 san=[127.0.0.1 192.168.72.198 default-k8s-diff-port-313436 localhost minikube]
	I0316 00:16:52.536692  123819 provision.go:177] copyRemoteCerts
	I0316 00:16:52.536756  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:16:52.536790  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.539525  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.539805  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.539837  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.540067  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.540264  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.540424  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.540599  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:52.629139  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:16:52.655092  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0316 00:16:52.681372  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:16:52.706496  123819 provision.go:87] duration metric: took 421.160351ms to configureAuth
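
	[note] configureAuth refreshes the machine's TLS material: the host-side ca.pem/cert.pem/key.pem are re-copied under .minikube, a server certificate is issued with the SANs listed in the log (127.0.0.1, the VM IP, the profile name, localhost, minikube), and the CA plus server pair are then scp'd to /etc/docker on the guest. The driver does this with Go's crypto libraries; a hand-rolled approximation with openssl, using the file names from the log, would look roughly like this (the openssl invocation itself is an assumption, not what minikube runs):

	    openssl genrsa -out server-key.pem 2048
	    openssl req -new -key server-key.pem -out server.csr \
	      -subj "/O=jenkins.default-k8s-diff-port-313436"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	      -out server.pem -days 1095 \
	      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.72.198,DNS:default-k8s-diff-port-313436,DNS:localhost,DNS:minikube")
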
	I0316 00:16:52.706529  123819 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:16:52.706737  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:16:52.706828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:52.709743  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710173  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:52.710198  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:52.710403  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:52.710616  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710822  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:52.710983  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:52.711148  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:52.711359  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:52.711380  123819 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:16:53.005107  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:16:53.005138  123819 machine.go:97] duration metric: took 1.112619102s to provisionDockerMachine
	I0316 00:16:53.005153  123819 start.go:293] postStartSetup for "default-k8s-diff-port-313436" (driver="kvm2")
	I0316 00:16:53.005166  123819 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:16:53.005185  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.005547  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:16:53.005581  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.008749  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009170  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.009196  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.009416  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.009617  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.009795  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.009973  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.100468  123819 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:16:53.105158  123819 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:16:53.105181  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:16:53.105243  123819 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:16:53.105314  123819 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:16:53.105399  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:16:53.116078  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:53.142400  123819 start.go:296] duration metric: took 137.231635ms for postStartSetup
	I0316 00:16:53.142454  123819 fix.go:56] duration metric: took 18.493815855s for fixHost
	I0316 00:16:53.142483  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.145282  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145658  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.145688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.145878  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.146104  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146288  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.146445  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.146625  123819 main.go:141] libmachine: Using SSH client type: native
	I0316 00:16:53.146820  123819 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.198 22 <nil> <nil>}
	I0316 00:16:53.146834  123819 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:16:53.260232  123819 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548213.237261690
	
	I0316 00:16:53.260255  123819 fix.go:216] guest clock: 1710548213.237261690
	I0316 00:16:53.260262  123819 fix.go:229] Guest: 2024-03-16 00:16:53.23726169 +0000 UTC Remote: 2024-03-16 00:16:53.142460792 +0000 UTC m=+262.706636561 (delta=94.800898ms)
	I0316 00:16:53.260292  123819 fix.go:200] guest clock delta is within tolerance: 94.800898ms
	I0316 00:16:53.260298  123819 start.go:83] releasing machines lock for "default-k8s-diff-port-313436", held for 18.611697781s
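
	[note] The clock-skew check above runs a date command on the guest and compares the result against the host clock; the %!s(MISSING) / %!N(MISSING) artifacts in the logged command appear to be the logger treating the literal format verbs as printf arguments. The guest-side command is effectively:

	    date +%s.%N    # prints e.g. 1710548213.237261690; host vs. guest delta must stay within tolerance
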
	I0316 00:16:53.260323  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.260629  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:53.263641  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264002  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.264032  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.264243  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.264889  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:16:53.265217  123819 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:16:53.265273  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.265404  123819 ssh_runner.go:195] Run: cat /version.json
	I0316 00:16:53.265434  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:16:53.268274  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268538  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268684  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268727  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.268960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.268969  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:53.268995  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:53.269113  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269206  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:16:53.269298  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:16:53.269419  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.269476  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:16:53.269572  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:16:53.372247  123819 ssh_runner.go:195] Run: systemctl --version
	I0316 00:16:53.378643  123819 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:16:53.527036  123819 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:16:53.534220  123819 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:16:53.534312  123819 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:16:53.554856  123819 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:16:53.554900  123819 start.go:494] detecting cgroup driver to use...
	I0316 00:16:53.554971  123819 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:16:53.580723  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:16:53.599919  123819 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:16:53.599996  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:16:53.613989  123819 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:16:53.628748  123819 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:16:53.745409  123819 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:16:53.906668  123819 docker.go:233] disabling docker service ...
	I0316 00:16:53.906733  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:16:53.928452  123819 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:16:53.949195  123819 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:16:54.118868  123819 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:16:54.250006  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
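
	[note] Because this profile uses crio as its container runtime, the start path stops and masks both cri-dockerd and dockerd (after first stopping containerd) before touching CRI-O. Condensed, the systemctl sequence that ssh_runner executes on the guest amounts to:

	    sudo systemctl stop -f cri-docker.socket cri-docker.service
	    sudo systemctl disable cri-docker.socket
	    sudo systemctl mask cri-docker.service
	    sudo systemctl stop -f docker.socket docker.service
	    sudo systemctl disable docker.socket
	    sudo systemctl mask docker.service
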
	I0316 00:16:54.264754  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:16:54.285825  123819 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:16:54.285890  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.298522  123819 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:16:54.298590  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.311118  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.323928  123819 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:16:54.336128  123819 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:16:54.348715  123819 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:16:54.359657  123819 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:16:54.359718  123819 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:16:54.376411  123819 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:16:54.388136  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:54.530444  123819 ssh_runner.go:195] Run: sudo systemctl restart crio
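
	[note] The block above is the CRI-O reconfiguration pass: crictl is pointed at the crio socket, the pause image and cgroup driver are rewritten in the 02-crio.conf drop-in, the br_netfilter module is loaded and IPv4 forwarding enabled, and crio is restarted. Pulled together, the guest-side commands from the log are roughly:

	    echo "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
	    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	    sudo modprobe br_netfilter
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	    sudo systemctl daemon-reload && sudo systemctl restart crio
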
	I0316 00:16:54.681895  123819 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:16:54.681984  123819 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:16:54.687334  123819 start.go:562] Will wait 60s for crictl version
	I0316 00:16:54.687398  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:16:54.691443  123819 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:16:54.730408  123819 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:16:54.730505  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.761591  123819 ssh_runner.go:195] Run: crio --version
	I0316 00:16:54.792351  123819 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0316 00:16:53.284071  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .Start
	I0316 00:16:53.284282  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring networks are active...
	I0316 00:16:53.284979  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network default is active
	I0316 00:16:53.285414  124077 main.go:141] libmachine: (old-k8s-version-402923) Ensuring network mk-old-k8s-version-402923 is active
	I0316 00:16:53.285909  124077 main.go:141] libmachine: (old-k8s-version-402923) Getting domain xml...
	I0316 00:16:53.286763  124077 main.go:141] libmachine: (old-k8s-version-402923) Creating domain...
	I0316 00:16:54.602594  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting to get IP...
	I0316 00:16:54.603578  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.604006  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.604070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.603967  124818 retry.go:31] will retry after 219.174944ms: waiting for machine to come up
	I0316 00:16:54.825360  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:54.825772  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:54.825802  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:54.825716  124818 retry.go:31] will retry after 377.238163ms: waiting for machine to come up
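
	[note] Restarting old-k8s-version-402923 follows the kvm2 driver's fixHost path: the stopped libvirt domain is reused, its networks are ensured active, the domain is recreated from its stored XML and started, and the driver then polls with backoff until DHCP hands the VM an address. Outside the driver, the same steps map roughly onto virsh (an approximation of the libvirt calls, not commands minikube literally runs):

	    virsh net-start default                      # ensure the default network is up
	    virsh net-start mk-old-k8s-version-402923    # ensure the profile's private network is up
	    virsh start old-k8s-version-402923           # boot the existing domain
	    virsh domifaddr old-k8s-version-402923       # repeat until an IP address appears
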
	I0316 00:16:54.793693  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetIP
	I0316 00:16:54.797023  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797439  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:16:54.797471  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:16:54.797665  123819 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0316 00:16:54.802065  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
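The /etc/hosts update in the command above uses a grep-out-then-append pattern (drop any stale entry for the name, write the fresh one, copy the temp file back). A minimal standalone sketch of the same idea; the function name is illustrative, not a minikube helper:

  update_hosts_entry() {  # remove any existing entry for the name, then append ip<TAB>name
    local ip="$1" name="$2"
    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts
  }
  update_hosts_entry 192.168.72.1 host.minikube.internal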
	I0316 00:16:54.815168  123819 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:16:54.815285  123819 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0316 00:16:54.815345  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:54.855493  123819 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0316 00:16:54.855553  123819 ssh_runner.go:195] Run: which lz4
	I0316 00:16:54.860096  123819 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:16:54.865644  123819 ssh_runner.go:362] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:16:54.865675  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0316 00:16:54.345117  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:56.346342  123537 node_ready.go:53] node "embed-certs-666637" has status "Ready":"False"
	I0316 00:16:57.346164  123537 node_ready.go:49] node "embed-certs-666637" has status "Ready":"True"
	I0316 00:16:57.346194  123537 node_ready.go:38] duration metric: took 7.005950923s for node "embed-certs-666637" to be "Ready" ...
	I0316 00:16:57.346207  123537 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:16:57.361331  123537 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377726  123537 pod_ready.go:92] pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace has status "Ready":"True"
	I0316 00:16:57.377750  123537 pod_ready.go:81] duration metric: took 16.388353ms for pod "coredns-5dd5756b68-t8xb4" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:57.377760  123537 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:16:55.204396  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.204938  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.204976  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.204858  124818 retry.go:31] will retry after 396.26515ms: waiting for machine to come up
	I0316 00:16:55.602628  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:55.603188  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:55.603215  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:55.603141  124818 retry.go:31] will retry after 566.334663ms: waiting for machine to come up
	I0316 00:16:56.170958  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.171556  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.171594  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.171506  124818 retry.go:31] will retry after 722.874123ms: waiting for machine to come up
	I0316 00:16:56.896535  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:56.897045  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:56.897080  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:56.896973  124818 retry.go:31] will retry after 626.623162ms: waiting for machine to come up
	I0316 00:16:57.525440  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:57.525975  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:57.526005  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:57.525928  124818 retry.go:31] will retry after 999.741125ms: waiting for machine to come up
	I0316 00:16:58.527590  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:58.528070  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:58.528104  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:58.528014  124818 retry.go:31] will retry after 959.307038ms: waiting for machine to come up
	I0316 00:16:59.488631  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:16:59.489038  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:16:59.489073  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:16:59.488971  124818 retry.go:31] will retry after 1.638710264s: waiting for machine to come up
	I0316 00:16:56.676506  123819 crio.go:444] duration metric: took 1.816442841s to copy over tarball
	I0316 00:16:56.676609  123819 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:16:59.338617  123819 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.661966532s)
	I0316 00:16:59.338655  123819 crio.go:451] duration metric: took 2.662115388s to extract the tarball
	I0316 00:16:59.338665  123819 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:16:59.387693  123819 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:16:59.453534  123819 crio.go:496] all images are preloaded for cri-o runtime.
	I0316 00:16:59.453565  123819 cache_images.go:84] Images are preloaded, skipping loading
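The preload handling above reduces to: probe for the tarball on the node, copy it over if missing, extract it into /var, remove it, then re-list images. A condensed sketch of that flow, with the host-to-node copy left as a placeholder rather than an invented command:

  if ! stat -c "%s %y" /preloaded.tar.lz4 >/dev/null 2>&1; then
    # copy preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 from the host cache to /preloaded.tar.lz4
    :
  fi
  sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
  sudo rm -f /preloaded.tar.lz4
  sudo crictl images --output json   # should now report the preloaded kube images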
	I0316 00:16:59.453575  123819 kubeadm.go:928] updating node { 192.168.72.198 8444 v1.28.4 crio true true} ...
	I0316 00:16:59.453744  123819 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-313436 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.198
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:16:59.453841  123819 ssh_runner.go:195] Run: crio config
	I0316 00:16:59.518492  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:16:59.518525  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:16:59.518543  123819 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:16:59.518572  123819 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.198 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-313436 NodeName:default-k8s-diff-port-313436 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.198"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.198 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:16:59.518791  123819 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.198
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-313436"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.198
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.198"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
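	
	The generated kubeadm.yaml above bundles InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one file. During this restart it is consumed phase by phase rather than with a single kubeadm init, as the commands further down in this log show; in order:

  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

  The "addon all" phase runs only after the API server reports healthy.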
	
	I0316 00:16:59.518876  123819 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0316 00:16:59.529778  123819 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:16:59.529860  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:16:59.542186  123819 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I0316 00:16:59.563037  123819 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:16:59.585167  123819 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2172 bytes)
	I0316 00:16:59.607744  123819 ssh_runner.go:195] Run: grep 192.168.72.198	control-plane.minikube.internal$ /etc/hosts
	I0316 00:16:59.612687  123819 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.198	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:16:59.628607  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:16:59.767487  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:16:59.786494  123819 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436 for IP: 192.168.72.198
	I0316 00:16:59.786520  123819 certs.go:194] generating shared ca certs ...
	I0316 00:16:59.786545  123819 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:16:59.786688  123819 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:16:59.786722  123819 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:16:59.786728  123819 certs.go:256] generating profile certs ...
	I0316 00:16:59.786827  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.key
	I0316 00:16:59.786975  123819 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key.254d5830
	I0316 00:16:59.787049  123819 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key
	I0316 00:16:59.787204  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:16:59.787248  123819 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:16:59.787262  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:16:59.787295  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:16:59.787351  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:16:59.787386  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:16:59.787449  123819 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:16:59.788288  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:16:59.824257  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:16:59.859470  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:16:59.904672  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:16:59.931832  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0316 00:16:59.965654  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:00.006949  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:00.039120  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:00.071341  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:00.095585  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:00.122165  123819 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:00.149982  123819 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:00.170019  123819 ssh_runner.go:195] Run: openssl version
	I0316 00:17:00.176232  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:00.188738  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193708  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.193780  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:00.200433  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:00.215116  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:00.228871  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234074  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.234141  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:00.240553  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:00.252454  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:00.264690  123819 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269493  123819 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.269573  123819 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:00.275584  123819 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:00.287859  123819 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:00.292474  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:00.298744  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:00.304793  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:00.311156  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:00.317777  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:00.324148  123819 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
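The certificate steps above do two things: install each CA under its OpenSSL subject-hash name so the node's trust store picks it up, and verify that each cluster certificate will not expire within the next 24 hours. A compact sketch of both, using file names taken from the log:

  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
  openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
    && echo "apiserver-kubelet-client.crt valid for at least another 24h"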
	I0316 00:17:00.330667  123819 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-313436 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-313436 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration
:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:00.330763  123819 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:00.330813  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.374868  123819 cri.go:89] found id: ""
	I0316 00:17:00.374961  123819 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:00.386218  123819 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:00.386240  123819 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:00.386245  123819 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:00.386288  123819 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:00.397129  123819 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:00.398217  123819 kubeconfig.go:125] found "default-k8s-diff-port-313436" server: "https://192.168.72.198:8444"
	I0316 00:17:00.400506  123819 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:00.411430  123819 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.198
	I0316 00:17:00.411462  123819 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:00.411477  123819 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:00.411528  123819 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:00.448545  123819 cri.go:89] found id: ""
	I0316 00:17:00.448619  123819 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:00.469230  123819 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:00.480622  123819 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:00.480644  123819 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:00.480695  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0316 00:16:59.384420  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.094272  123537 pod_ready.go:102] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:02.390117  123537 pod_ready.go:92] pod "etcd-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.390145  123537 pod_ready.go:81] duration metric: took 5.012377671s for pod "etcd-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.390156  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398207  123537 pod_ready.go:92] pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.398236  123537 pod_ready.go:81] duration metric: took 8.071855ms for pod "kube-apiserver-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.398248  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405415  123537 pod_ready.go:92] pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.405443  123537 pod_ready.go:81] duration metric: took 7.186495ms for pod "kube-controller-manager-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.405453  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412646  123537 pod_ready.go:92] pod "kube-proxy-8fpc5" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.412665  123537 pod_ready.go:81] duration metric: took 7.204465ms for pod "kube-proxy-8fpc5" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.412673  123537 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606336  123537 pod_ready.go:92] pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:02.606369  123537 pod_ready.go:81] duration metric: took 193.687951ms for pod "kube-scheduler-embed-certs-666637" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:02.606384  123537 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:01.129465  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:01.129960  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:01.129990  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:01.129903  124818 retry.go:31] will retry after 2.005172311s: waiting for machine to come up
	I0316 00:17:03.136657  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:03.137177  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:03.137204  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:03.137110  124818 retry.go:31] will retry after 2.208820036s: waiting for machine to come up
	I0316 00:17:00.492088  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:00.743504  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:00.756322  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0316 00:17:00.766476  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:00.766545  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:00.776849  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.786610  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:00.786676  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:00.797455  123819 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0316 00:17:00.808026  123819 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:00.808083  123819 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:00.819306  123819 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:00.834822  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:00.962203  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.535753  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.762322  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.843195  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:01.944855  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:01.944971  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.446047  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.945791  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:02.983641  123819 api_server.go:72] duration metric: took 1.038786332s to wait for apiserver process to appear ...
	I0316 00:17:02.983680  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:02.983704  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:04.615157  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:07.114447  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:06.343729  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.343763  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.343786  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.364621  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:17:06.364659  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:17:06.483852  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.491403  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.491433  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:06.983931  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:06.994258  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:06.994296  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.483821  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.506265  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:17:07.506301  123819 api_server.go:103] status: https://192.168.72.198:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:17:07.983846  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:17:07.988700  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:17:07.995996  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:17:07.996021  123819 api_server.go:131] duration metric: took 5.012333318s to wait for apiserver health ...
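The healthz wait above first sees 403 (anonymous access is rejected until the RBAC bootstrap roles are in place), then 500 while the remaining post-start hooks finish, and finally 200. A rough shell equivalent of the polling loop; the curl flags are assumptions for illustration, not what minikube actually runs:

  until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.72.198:8444/healthz)" = "200" ]; do
    sleep 0.5
  done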
	I0316 00:17:07.996032  123819 cni.go:84] Creating CNI manager for ""
	I0316 00:17:07.996041  123819 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:07.998091  123819 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:17:07.999628  123819 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:17:08.010263  123819 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:17:08.041667  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:17:08.053611  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:17:08.053656  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:17:08.053668  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:17:08.053681  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:17:08.053694  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:17:08.053706  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:17:08.053717  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:17:08.053730  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:17:08.053739  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:17:08.053747  123819 system_pods.go:74] duration metric: took 12.054433ms to wait for pod list to return data ...
	I0316 00:17:08.053763  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:17:08.057781  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:17:08.057808  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:17:08.057818  123819 node_conditions.go:105] duration metric: took 4.047698ms to run NodePressure ...
	I0316 00:17:08.057837  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:08.282870  123819 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288338  123819 kubeadm.go:733] kubelet initialised
	I0316 00:17:08.288359  123819 kubeadm.go:734] duration metric: took 5.456436ms waiting for restarted kubelet to initialise ...
	I0316 00:17:08.288367  123819 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:08.294256  123819 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.302762  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302802  123819 pod_ready.go:81] duration metric: took 8.523485ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.302814  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.302823  123819 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.309581  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309604  123819 pod_ready.go:81] duration metric: took 6.77179ms for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.309617  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.309625  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.315399  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315419  123819 pod_ready.go:81] duration metric: took 5.78558ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.315428  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.315434  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.445776  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445808  123819 pod_ready.go:81] duration metric: took 130.363739ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.445821  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.445829  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:08.846181  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846228  123819 pod_ready.go:81] duration metric: took 400.382095ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:08.846243  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-proxy-btmmm" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:08.846251  123819 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.245568  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245599  123819 pod_ready.go:81] duration metric: took 399.329058ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.245612  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.245618  123819 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:09.646855  123819 pod_ready.go:97] node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646888  123819 pod_ready.go:81] duration metric: took 401.262603ms for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:17:09.646901  123819 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-313436" hosting pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:09.646909  123819 pod_ready.go:38] duration metric: took 1.358531936s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:09.646926  123819 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:17:09.659033  123819 ops.go:34] apiserver oom_adj: -16
	I0316 00:17:09.659059  123819 kubeadm.go:591] duration metric: took 9.272806311s to restartPrimaryControlPlane
	I0316 00:17:09.659070  123819 kubeadm.go:393] duration metric: took 9.328414192s to StartCluster
	I0316 00:17:09.659091  123819 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.659166  123819 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:09.661439  123819 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:09.661729  123819 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.72.198 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:17:09.663462  123819 out.go:177] * Verifying Kubernetes components...
	I0316 00:17:09.661800  123819 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:17:09.661986  123819 config.go:182] Loaded profile config "default-k8s-diff-port-313436": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:17:09.664841  123819 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:09.664874  123819 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664839  123819 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.664964  123819 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.664980  123819 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:17:09.664847  123819 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-313436"
	I0316 00:17:09.665023  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.665037  123819 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.665053  123819 addons.go:243] addon metrics-server should already be in state true
	I0316 00:17:09.665084  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.664922  123819 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-313436"
	I0316 00:17:09.665349  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665377  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665445  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665474  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.665607  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.665637  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.680337  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44861
	I0316 00:17:09.680351  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42895
	I0316 00:17:09.680799  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.680939  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.681331  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681366  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681541  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.681560  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.681736  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.681974  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.682359  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682407  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.682461  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.682494  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.683660  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37013
	I0316 00:17:09.684088  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.684575  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.684600  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.684992  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.685218  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.688973  123819 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-313436"
	W0316 00:17:09.688994  123819 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:17:09.689028  123819 host.go:66] Checking if "default-k8s-diff-port-313436" exists ...
	I0316 00:17:09.689372  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.689397  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.698126  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44333
	I0316 00:17:09.698527  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.699052  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.699079  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.699407  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.699606  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.700389  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35313
	I0316 00:17:09.700824  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.701308  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.701327  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.701610  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.701681  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.704168  123819 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:17:09.701891  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.704403  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44829
	I0316 00:17:09.706042  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:17:09.706076  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:17:09.706102  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.706988  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.707805  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.707831  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.708465  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.708556  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.709451  123819 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:09.709500  123819 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:09.709520  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.711354  123819 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:05.349216  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:05.349685  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:05.349718  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:05.349622  124818 retry.go:31] will retry after 2.862985007s: waiting for machine to come up
	I0316 00:17:08.214613  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:08.215206  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | unable to find current IP address of domain old-k8s-version-402923 in network mk-old-k8s-version-402923
	I0316 00:17:08.215242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | I0316 00:17:08.215145  124818 retry.go:31] will retry after 3.529812379s: waiting for machine to come up
	I0316 00:17:09.709911  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.710103  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.712849  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.712865  123819 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:09.712886  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:17:09.712910  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.713010  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.713202  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.713365  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.715688  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716029  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.716064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.716260  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.716437  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.716662  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.716826  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.725309  123819 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34363
	I0316 00:17:09.725659  123819 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:09.726175  123819 main.go:141] libmachine: Using API Version  1
	I0316 00:17:09.726191  123819 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:09.726492  123819 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:09.726665  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetState
	I0316 00:17:09.728459  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .DriverName
	I0316 00:17:09.728721  123819 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.728739  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:17:09.728753  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHHostname
	I0316 00:17:09.732122  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732546  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cc:b2:59", ip: ""} in network mk-default-k8s-diff-port-313436: {Iface:virbr4 ExpiryTime:2024-03-16 01:09:03 +0000 UTC Type:0 Mac:52:54:00:cc:b2:59 Iaid: IPaddr:192.168.72.198 Prefix:24 Hostname:default-k8s-diff-port-313436 Clientid:01:52:54:00:cc:b2:59}
	I0316 00:17:09.732576  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | domain default-k8s-diff-port-313436 has defined IP address 192.168.72.198 and MAC address 52:54:00:cc:b2:59 in network mk-default-k8s-diff-port-313436
	I0316 00:17:09.732733  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHPort
	I0316 00:17:09.732908  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHKeyPath
	I0316 00:17:09.733064  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .GetSSHUsername
	I0316 00:17:09.733206  123819 sshutil.go:53] new ssh client: &{IP:192.168.72.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/default-k8s-diff-port-313436/id_rsa Username:docker}
	I0316 00:17:09.838182  123819 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:09.857248  123819 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:09.956751  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:17:09.956775  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:17:09.982142  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:17:09.992293  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:17:09.992319  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:17:10.000878  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:17:10.035138  123819 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:10.035171  123819 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:17:10.066721  123819 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:17:11.153759  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.171576504s)
	I0316 00:17:11.153815  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.153828  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154237  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154241  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154262  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.154271  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.154281  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.154569  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.154601  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.154609  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165531  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.165579  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.165868  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.165922  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.165879  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536530  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.469764101s)
	I0316 00:17:11.536596  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536607  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536648  123819 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.53572281s)
	I0316 00:17:11.536694  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.536713  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.536960  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.536963  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536988  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.536995  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537001  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537005  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537010  123819 main.go:141] libmachine: Making call to close driver server
	I0316 00:17:11.537013  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537019  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) Calling .Close
	I0316 00:17:11.537338  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537346  123819 main.go:141] libmachine: (default-k8s-diff-port-313436) DBG | Closing plugin on server side
	I0316 00:17:11.537218  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537365  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.537376  123819 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-313436"
	I0316 00:17:11.537404  123819 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:17:11.537425  123819 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:17:11.539481  123819 out.go:177] * Enabled addons: default-storageclass, metrics-server, storage-provisioner
	I0316 00:17:09.114699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:11.613507  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:13.204814  123454 start.go:364] duration metric: took 52.116735477s to acquireMachinesLock for "no-preload-238598"
	I0316 00:17:13.204888  123454 start.go:96] Skipping create...Using existing machine configuration
	I0316 00:17:13.204900  123454 fix.go:54] fixHost starting: 
	I0316 00:17:13.205405  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:17:13.205446  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:17:13.222911  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46769
	I0316 00:17:13.223326  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:17:13.223784  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:17:13.223811  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:17:13.224153  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:17:13.224338  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:13.224507  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:17:13.226028  123454 fix.go:112] recreateIfNeeded on no-preload-238598: state=Stopped err=<nil>
	I0316 00:17:13.226051  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	W0316 00:17:13.226232  123454 fix.go:138] unexpected machine state, will restart: <nil>
	I0316 00:17:13.227865  123454 out.go:177] * Restarting existing kvm2 VM for "no-preload-238598" ...
	I0316 00:17:11.749327  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749801  124077 main.go:141] libmachine: (old-k8s-version-402923) Found IP for machine: 192.168.39.107
	I0316 00:17:11.749826  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has current primary IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.749834  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserving static IP address...
	I0316 00:17:11.750286  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.750322  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | skip adding static IP to network mk-old-k8s-version-402923 - found existing host DHCP lease matching {name: "old-k8s-version-402923", mac: "52:54:00:0d:b3:2b", ip: "192.168.39.107"}
	I0316 00:17:11.750340  124077 main.go:141] libmachine: (old-k8s-version-402923) Reserved static IP address: 192.168.39.107
	I0316 00:17:11.750375  124077 main.go:141] libmachine: (old-k8s-version-402923) Waiting for SSH to be available...
	I0316 00:17:11.750416  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Getting to WaitForSSH function...
	I0316 00:17:11.752642  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753071  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.753100  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.753199  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH client type: external
	I0316 00:17:11.753242  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa (-rw-------)
	I0316 00:17:11.753275  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.107 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:11.753291  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | About to run SSH command:
	I0316 00:17:11.753305  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | exit 0
	I0316 00:17:11.876128  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:11.876541  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetConfigRaw
	I0316 00:17:11.877244  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:11.880520  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.880949  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.880974  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.881301  124077 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/config.json ...
	I0316 00:17:11.881493  124077 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:11.881513  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:11.881732  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.884046  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884427  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.884460  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.884615  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.884784  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.884923  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.885063  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.885269  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.885524  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.885541  124077 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:11.991853  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:11.991887  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992138  124077 buildroot.go:166] provisioning hostname "old-k8s-version-402923"
	I0316 00:17:11.992171  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:11.992394  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:11.994983  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995433  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:11.995457  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:11.995640  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:11.995847  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996110  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:11.996275  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:11.996459  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:11.996624  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:11.996637  124077 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-402923 && echo "old-k8s-version-402923" | sudo tee /etc/hostname
	I0316 00:17:12.113574  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-402923
	
	I0316 00:17:12.113608  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.116753  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117230  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.117266  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.117462  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.117678  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117816  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.117956  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.118143  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.118318  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.118335  124077 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-402923' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-402923/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-402923' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:12.230058  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:12.230092  124077 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:12.230111  124077 buildroot.go:174] setting up certificates
	I0316 00:17:12.230123  124077 provision.go:84] configureAuth start
	I0316 00:17:12.230138  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetMachineName
	I0316 00:17:12.230461  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:12.233229  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233705  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.233732  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.233849  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.236118  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236467  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.236499  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.236661  124077 provision.go:143] copyHostCerts
	I0316 00:17:12.236744  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:12.236759  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:12.236824  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:12.236942  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:12.236954  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:12.236987  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:12.237075  124077 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:12.237085  124077 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:12.237113  124077 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:12.237180  124077 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-402923 san=[127.0.0.1 192.168.39.107 localhost minikube old-k8s-version-402923]
	I0316 00:17:12.510410  124077 provision.go:177] copyRemoteCerts
	I0316 00:17:12.510502  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:12.510543  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.513431  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.513854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.513917  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.514129  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.514396  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.514576  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.514726  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:12.602632  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:12.630548  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0316 00:17:12.658198  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:12.686443  124077 provision.go:87] duration metric: took 456.304686ms to configureAuth
	I0316 00:17:12.686478  124077 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:12.686653  124077 config.go:182] Loaded profile config "old-k8s-version-402923": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:17:12.686725  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.689494  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.689854  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.689889  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.690016  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.690214  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690415  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.690555  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.690690  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:12.690860  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:12.690877  124077 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:12.956570  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:12.956598  124077 machine.go:97] duration metric: took 1.075091048s to provisionDockerMachine
	I0316 00:17:12.956609  124077 start.go:293] postStartSetup for "old-k8s-version-402923" (driver="kvm2")
	I0316 00:17:12.956620  124077 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:12.956635  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:12.956995  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:12.957045  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:12.959944  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960371  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:12.960407  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:12.960689  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:12.960926  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:12.961118  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:12.961276  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.043040  124077 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:13.048885  124077 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:13.048918  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:13.049002  124077 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:13.049098  124077 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:13.049206  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:13.062856  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:13.089872  124077 start.go:296] duration metric: took 133.24467ms for postStartSetup
	I0316 00:17:13.089928  124077 fix.go:56] duration metric: took 19.829445669s for fixHost
	I0316 00:17:13.089985  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.093385  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093672  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.093711  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.093901  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.094159  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094318  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.094478  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.094727  124077 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:13.094960  124077 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.107 22 <nil> <nil>}
	I0316 00:17:13.094985  124077 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:13.204654  124077 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548233.182671842
	
	I0316 00:17:13.204681  124077 fix.go:216] guest clock: 1710548233.182671842
	I0316 00:17:13.204689  124077 fix.go:229] Guest: 2024-03-16 00:17:13.182671842 +0000 UTC Remote: 2024-03-16 00:17:13.089953771 +0000 UTC m=+247.980315605 (delta=92.718071ms)
	I0316 00:17:13.204711  124077 fix.go:200] guest clock delta is within tolerance: 92.718071ms
	I0316 00:17:13.204718  124077 start.go:83] releasing machines lock for "old-k8s-version-402923", held for 19.944277451s
	I0316 00:17:13.204750  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.205065  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:13.208013  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208349  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.208404  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.208506  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209191  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209417  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .DriverName
	I0316 00:17:13.209518  124077 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:13.209659  124077 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:13.209675  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.209699  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHHostname
	I0316 00:17:13.212623  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212837  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.212995  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213025  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213288  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213346  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:13.213445  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:13.213523  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213546  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHPort
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHKeyPath
	I0316 00:17:13.213764  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.213905  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetSSHUsername
	I0316 00:17:13.214088  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.214297  124077 sshutil.go:53] new ssh client: &{IP:192.168.39.107 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/old-k8s-version-402923/id_rsa Username:docker}
	I0316 00:17:13.294052  124077 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:13.317549  124077 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:13.470650  124077 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:13.477881  124077 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:13.478008  124077 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:13.494747  124077 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:13.494771  124077 start.go:494] detecting cgroup driver to use...
	I0316 00:17:13.494845  124077 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:13.511777  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:13.527076  124077 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:13.527140  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:13.542746  124077 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:13.558707  124077 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:13.686621  124077 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:13.832610  124077 docker.go:233] disabling docker service ...
	I0316 00:17:13.832695  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:13.848930  124077 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:13.864909  124077 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:14.039607  124077 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:14.185885  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:14.203988  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:14.224783  124077 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0316 00:17:14.224842  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.236072  124077 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:14.236148  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.246560  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.257779  124077 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:14.268768  124077 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:14.280112  124077 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:14.289737  124077 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:14.289832  124077 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:14.304315  124077 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:14.314460  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:14.450929  124077 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:14.614957  124077 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:14.615035  124077 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:14.620259  124077 start.go:562] Will wait 60s for crictl version
	I0316 00:17:14.620322  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:14.624336  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:14.674406  124077 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:14.674506  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.706213  124077 ssh_runner.go:195] Run: crio --version
	I0316 00:17:14.738104  124077 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0316 00:17:14.739455  124077 main.go:141] libmachine: (old-k8s-version-402923) Calling .GetIP
	I0316 00:17:14.742674  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743068  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:0d:b3:2b", ip: ""} in network mk-old-k8s-version-402923: {Iface:virbr1 ExpiryTime:2024-03-16 01:17:05 +0000 UTC Type:0 Mac:52:54:00:0d:b3:2b Iaid: IPaddr:192.168.39.107 Prefix:24 Hostname:old-k8s-version-402923 Clientid:01:52:54:00:0d:b3:2b}
	I0316 00:17:14.743098  124077 main.go:141] libmachine: (old-k8s-version-402923) DBG | domain old-k8s-version-402923 has defined IP address 192.168.39.107 and MAC address 52:54:00:0d:b3:2b in network mk-old-k8s-version-402923
	I0316 00:17:14.743374  124077 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:14.748046  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:14.761565  124077 kubeadm.go:877] updating cluster {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:14.761711  124077 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0316 00:17:14.761788  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:14.814334  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:14.814426  124077 ssh_runner.go:195] Run: which lz4
	I0316 00:17:14.819003  124077 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0316 00:17:14.824319  124077 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0316 00:17:14.824359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0316 00:17:11.540876  123819 addons.go:505] duration metric: took 1.87908534s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner]
	I0316 00:17:11.862772  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.866333  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:13.229181  123454 main.go:141] libmachine: (no-preload-238598) Calling .Start
	I0316 00:17:13.229409  123454 main.go:141] libmachine: (no-preload-238598) Ensuring networks are active...
	I0316 00:17:13.230257  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network default is active
	I0316 00:17:13.230618  123454 main.go:141] libmachine: (no-preload-238598) Ensuring network mk-no-preload-238598 is active
	I0316 00:17:13.231135  123454 main.go:141] libmachine: (no-preload-238598) Getting domain xml...
	I0316 00:17:13.232023  123454 main.go:141] libmachine: (no-preload-238598) Creating domain...
	I0316 00:17:14.513800  123454 main.go:141] libmachine: (no-preload-238598) Waiting to get IP...
	I0316 00:17:14.514838  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.515446  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.515520  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.515407  125029 retry.go:31] will retry after 275.965955ms: waiting for machine to come up
	I0316 00:17:14.793095  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:14.793594  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:14.793721  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:14.793667  125029 retry.go:31] will retry after 347.621979ms: waiting for machine to come up
	I0316 00:17:15.143230  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.143869  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.143909  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.143820  125029 retry.go:31] will retry after 301.441766ms: waiting for machine to come up
	I0316 00:17:15.446476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.446917  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.446964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.446865  125029 retry.go:31] will retry after 431.207345ms: waiting for machine to come up
	I0316 00:17:13.615911  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.616381  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:17.618352  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:16.731675  124077 crio.go:444] duration metric: took 1.912713892s to copy over tarball
	I0316 00:17:16.731786  124077 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0316 00:17:16.362143  123819 node_ready.go:53] node "default-k8s-diff-port-313436" has status "Ready":"False"
	I0316 00:17:16.866488  123819 node_ready.go:49] node "default-k8s-diff-port-313436" has status "Ready":"True"
	I0316 00:17:16.866522  123819 node_ready.go:38] duration metric: took 7.00923342s for node "default-k8s-diff-port-313436" to be "Ready" ...
	I0316 00:17:16.866535  123819 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:17:16.881909  123819 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897574  123819 pod_ready.go:92] pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:16.897617  123819 pod_ready.go:81] duration metric: took 15.618728ms for pod "coredns-5dd5756b68-w9fx2" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:16.897630  123819 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:18.910740  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:15.879693  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:15.880186  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:15.880222  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:15.880148  125029 retry.go:31] will retry after 747.650888ms: waiting for machine to come up
	I0316 00:17:16.629378  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:16.631312  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:16.631352  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:16.631193  125029 retry.go:31] will retry after 670.902171ms: waiting for machine to come up
	I0316 00:17:17.304282  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:17.304704  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:17.304751  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:17.304658  125029 retry.go:31] will retry after 1.160879196s: waiting for machine to come up
	I0316 00:17:18.466662  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:18.467103  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:18.467136  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:18.467049  125029 retry.go:31] will retry after 948.597188ms: waiting for machine to come up
	I0316 00:17:19.417144  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:19.417623  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:19.417657  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:19.417561  125029 retry.go:31] will retry after 1.263395738s: waiting for machine to come up
	I0316 00:17:20.289713  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.613643  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.183908  124077 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.452076527s)
	I0316 00:17:20.317589  124077 crio.go:451] duration metric: took 3.585867705s to extract the tarball
	I0316 00:17:20.317615  124077 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0316 00:17:20.363420  124077 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:20.399307  124077 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0316 00:17:20.399353  124077 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:20.399433  124077 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.399476  124077 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.399524  124077 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.399639  124077 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0316 00:17:20.399671  124077 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.399726  124077 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.399439  124077 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.399920  124077 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.401767  124077 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.401821  124077 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0316 00:17:20.401838  124077 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.401899  124077 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:20.401966  124077 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.401706  124077 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.401702  124077 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.532875  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.541483  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.543646  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.545760  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.547605  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.610163  124077 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0316 00:17:20.610214  124077 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.610262  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.633933  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0316 00:17:20.660684  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.700145  124077 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0316 00:17:20.700206  124077 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.700263  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720422  124077 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0316 00:17:20.720520  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0316 00:17:20.720528  124077 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0316 00:17:20.720615  124077 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0316 00:17:20.720638  124077 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0316 00:17:20.720641  124077 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.720679  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720682  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720468  124077 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0316 00:17:20.720763  124077 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.720804  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.720545  124077 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.720858  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.777665  124077 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0316 00:17:20.777715  124077 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.777763  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0316 00:17:20.777810  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0316 00:17:20.777818  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0316 00:17:20.777769  124077 ssh_runner.go:195] Run: which crictl
	I0316 00:17:20.791476  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0316 00:17:20.791491  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0316 00:17:20.791562  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0316 00:17:20.862067  124077 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0316 00:17:20.862129  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0316 00:17:20.938483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0316 00:17:20.939305  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0316 00:17:20.953390  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0316 00:17:20.953463  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0316 00:17:20.953483  124077 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0316 00:17:21.092542  124077 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:21.242527  124077 cache_images.go:92] duration metric: took 843.146562ms to LoadCachedImages
	W0316 00:17:21.242626  124077 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0: no such file or directory
	I0316 00:17:21.242643  124077 kubeadm.go:928] updating node { 192.168.39.107 8443 v1.20.0 crio true true} ...
	I0316 00:17:21.242788  124077 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-402923 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.107
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:21.242874  124077 ssh_runner.go:195] Run: crio config
	I0316 00:17:21.293323  124077 cni.go:84] Creating CNI manager for ""
	I0316 00:17:21.293353  124077 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:21.293365  124077 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:21.293389  124077 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.107 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-402923 NodeName:old-k8s-version-402923 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.107"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.107 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0316 00:17:21.293586  124077 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.107
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-402923"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.107
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.107"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:21.293680  124077 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0316 00:17:21.305106  124077 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:21.305180  124077 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:21.316071  124077 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0316 00:17:21.336948  124077 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0316 00:17:21.355937  124077 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0316 00:17:21.375593  124077 ssh_runner.go:195] Run: grep 192.168.39.107	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:21.379918  124077 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.107	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:21.394770  124077 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:21.531658  124077 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:21.563657  124077 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923 for IP: 192.168.39.107
	I0316 00:17:21.563688  124077 certs.go:194] generating shared ca certs ...
	I0316 00:17:21.563709  124077 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:21.563878  124077 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:21.563944  124077 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:21.563958  124077 certs.go:256] generating profile certs ...
	I0316 00:17:21.564094  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.key
	I0316 00:17:21.564165  124077 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key.467cf8c5
	I0316 00:17:21.564216  124077 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key
	I0316 00:17:21.564354  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:21.564394  124077 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:21.564404  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:21.564441  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:21.564475  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:21.564516  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:21.564578  124077 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:21.565469  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:21.612500  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:21.651970  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:21.682386  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:21.715359  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0316 00:17:21.756598  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0316 00:17:21.799234  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:21.835309  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0316 00:17:21.870877  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:21.900922  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:21.929555  124077 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:21.958817  124077 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:21.979750  124077 ssh_runner.go:195] Run: openssl version
	I0316 00:17:21.987997  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:22.001820  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006864  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.006954  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:22.012983  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:22.024812  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:22.037905  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.042914  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.043007  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:22.049063  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:22.061418  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:22.074221  124077 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079325  124077 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.079411  124077 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:22.085833  124077 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:22.099816  124077 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:22.105310  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:22.112332  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:22.121017  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:22.128549  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:22.135442  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:22.142222  124077 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:22.149568  124077 kubeadm.go:391] StartCluster: {Name:old-k8s-version-402923 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-402923 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.107 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:22.149665  124077 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:22.149727  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.198873  124077 cri.go:89] found id: ""
	I0316 00:17:22.198953  124077 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:22.210536  124077 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:22.210561  124077 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:22.210566  124077 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:22.210622  124077 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:22.222613  124077 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:22.224015  124077 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-402923" does not appear in /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:17:22.224727  124077 kubeconfig.go:62] /home/jenkins/minikube-integration/17991-75602/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-402923" cluster setting kubeconfig missing "old-k8s-version-402923" context setting]
	I0316 00:17:22.225693  124077 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:22.227479  124077 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:22.240938  124077 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.107
	I0316 00:17:22.240977  124077 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:22.240992  124077 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:22.241049  124077 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:22.293013  124077 cri.go:89] found id: ""
	I0316 00:17:22.293113  124077 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:22.319848  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:22.331932  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:22.331974  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:22.332020  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:22.343836  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:22.343913  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:22.355503  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:22.365769  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:22.365829  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:22.375963  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.386417  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:22.386471  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:22.396945  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:22.407816  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:22.407877  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:22.417910  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:22.428553  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:22.543077  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.261917  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.504217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.635360  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:23.720973  124077 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:23.721079  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.221226  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:24.721207  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:21.865146  123819 pod_ready.go:102] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:22.241535  123819 pod_ready.go:92] pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.241561  123819 pod_ready.go:81] duration metric: took 5.34392174s for pod "etcd-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.241573  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247469  123819 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.247501  123819 pod_ready.go:81] duration metric: took 5.919787ms for pod "kube-apiserver-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.247515  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756151  123819 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.756180  123819 pod_ready.go:81] duration metric: took 508.652978ms for pod "kube-controller-manager-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.756194  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762214  123819 pod_ready.go:92] pod "kube-proxy-btmmm" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.762254  123819 pod_ready.go:81] duration metric: took 6.041426ms for pod "kube-proxy-btmmm" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.762268  123819 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769644  123819 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace has status "Ready":"True"
	I0316 00:17:22.769668  123819 pod_ready.go:81] duration metric: took 7.391813ms for pod "kube-scheduler-default-k8s-diff-port-313436" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:22.769681  123819 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	I0316 00:17:24.780737  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:20.682443  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:20.798804  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:20.798840  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:20.682821  125029 retry.go:31] will retry after 1.834378571s: waiting for machine to come up
	I0316 00:17:22.518539  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:22.518997  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:22.519027  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:22.518945  125029 retry.go:31] will retry after 1.944866033s: waiting for machine to come up
	I0316 00:17:24.466332  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:24.466902  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:24.466930  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:24.466847  125029 retry.go:31] will retry after 3.4483736s: waiting for machine to come up
	I0316 00:17:24.615642  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.113920  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:25.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:25.722104  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.221395  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:26.721375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.221676  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.721383  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.221512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:28.721927  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.222159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:29.721924  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:27.278017  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:29.777128  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:27.919457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:27.919931  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:27.919964  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:27.919891  125029 retry.go:31] will retry after 3.122442649s: waiting for machine to come up
	I0316 00:17:29.613500  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.613674  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:30.221532  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:30.721246  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.222123  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:31.721991  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.221277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.721224  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.221252  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:33.721893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.221785  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:34.722078  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:32.276855  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:34.277228  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:31.044512  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:31.044939  123454 main.go:141] libmachine: (no-preload-238598) DBG | unable to find current IP address of domain no-preload-238598 in network mk-no-preload-238598
	I0316 00:17:31.044970  123454 main.go:141] libmachine: (no-preload-238598) DBG | I0316 00:17:31.044884  125029 retry.go:31] will retry after 4.529863895s: waiting for machine to come up
	I0316 00:17:34.112266  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:36.118023  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:35.576311  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.576834  123454 main.go:141] libmachine: (no-preload-238598) Found IP for machine: 192.168.50.137
	I0316 00:17:35.576858  123454 main.go:141] libmachine: (no-preload-238598) Reserving static IP address...
	I0316 00:17:35.576875  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has current primary IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.577312  123454 main.go:141] libmachine: (no-preload-238598) Reserved static IP address: 192.168.50.137
	I0316 00:17:35.577355  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.577365  123454 main.go:141] libmachine: (no-preload-238598) Waiting for SSH to be available...
	I0316 00:17:35.577404  123454 main.go:141] libmachine: (no-preload-238598) DBG | skip adding static IP to network mk-no-preload-238598 - found existing host DHCP lease matching {name: "no-preload-238598", mac: "52:54:00:67:85:15", ip: "192.168.50.137"}
	I0316 00:17:35.577419  123454 main.go:141] libmachine: (no-preload-238598) DBG | Getting to WaitForSSH function...
	I0316 00:17:35.579640  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580061  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.580108  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.580210  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH client type: external
	I0316 00:17:35.580269  123454 main.go:141] libmachine: (no-preload-238598) DBG | Using SSH private key: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa (-rw-------)
	I0316 00:17:35.580303  123454 main.go:141] libmachine: (no-preload-238598) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.137 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0316 00:17:35.580319  123454 main.go:141] libmachine: (no-preload-238598) DBG | About to run SSH command:
	I0316 00:17:35.580339  123454 main.go:141] libmachine: (no-preload-238598) DBG | exit 0
	I0316 00:17:35.711373  123454 main.go:141] libmachine: (no-preload-238598) DBG | SSH cmd err, output: <nil>: 
	I0316 00:17:35.711791  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetConfigRaw
	I0316 00:17:35.712598  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:35.715455  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.715929  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.715954  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.716326  123454 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/config.json ...
	I0316 00:17:35.716525  123454 machine.go:94] provisionDockerMachine start ...
	I0316 00:17:35.716551  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:35.716802  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.719298  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719612  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.719644  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.719780  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.720005  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720178  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.720315  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.720487  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.720666  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.720677  123454 main.go:141] libmachine: About to run SSH command:
	hostname
	I0316 00:17:35.835733  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0316 00:17:35.835760  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836004  123454 buildroot.go:166] provisioning hostname "no-preload-238598"
	I0316 00:17:35.836033  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:35.836240  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.839024  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839413  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.839445  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.839627  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.839811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.839977  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.840133  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.840279  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.840485  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.840504  123454 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-238598 && echo "no-preload-238598" | sudo tee /etc/hostname
	I0316 00:17:35.976590  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-238598
	
	I0316 00:17:35.976624  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:35.979354  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979689  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:35.979720  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:35.979879  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:35.980104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980267  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:35.980445  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:35.980602  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:35.980796  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:35.980815  123454 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-238598' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-238598/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-238598' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0316 00:17:36.106710  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0316 00:17:36.106750  123454 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17991-75602/.minikube CaCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17991-75602/.minikube}
	I0316 00:17:36.106774  123454 buildroot.go:174] setting up certificates
	I0316 00:17:36.106786  123454 provision.go:84] configureAuth start
	I0316 00:17:36.106800  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetMachineName
	I0316 00:17:36.107104  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.110050  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110431  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.110476  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.110592  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.113019  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113366  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.113391  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.113517  123454 provision.go:143] copyHostCerts
	I0316 00:17:36.113595  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem, removing ...
	I0316 00:17:36.113619  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem
	I0316 00:17:36.113699  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/ca.pem (1082 bytes)
	I0316 00:17:36.113898  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem, removing ...
	I0316 00:17:36.113911  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem
	I0316 00:17:36.113964  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/cert.pem (1123 bytes)
	I0316 00:17:36.114051  123454 exec_runner.go:144] found /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem, removing ...
	I0316 00:17:36.114063  123454 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem
	I0316 00:17:36.114089  123454 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17991-75602/.minikube/key.pem (1675 bytes)
	I0316 00:17:36.114155  123454 provision.go:117] generating server cert: /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem org=jenkins.no-preload-238598 san=[127.0.0.1 192.168.50.137 localhost minikube no-preload-238598]
	I0316 00:17:36.239622  123454 provision.go:177] copyRemoteCerts
	I0316 00:17:36.239706  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0316 00:17:36.239736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.242440  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.242806  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.242841  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.243086  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.243279  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.243482  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.243623  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.330601  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0316 00:17:36.359600  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0316 00:17:36.384258  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0316 00:17:36.409195  123454 provision.go:87] duration metric: took 302.39571ms to configureAuth
	I0316 00:17:36.409239  123454 buildroot.go:189] setting minikube options for container-runtime
	I0316 00:17:36.409440  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:17:36.409539  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.412280  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412618  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.412652  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.412811  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.413039  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413217  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.413366  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.413576  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.413803  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.413823  123454 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0316 00:17:36.703300  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0316 00:17:36.703365  123454 machine.go:97] duration metric: took 986.82471ms to provisionDockerMachine
	I0316 00:17:36.703418  123454 start.go:293] postStartSetup for "no-preload-238598" (driver="kvm2")
	I0316 00:17:36.703440  123454 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0316 00:17:36.703474  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.703838  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0316 00:17:36.703880  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.706655  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707019  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.707057  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.707237  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.707470  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.707626  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.707822  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.794605  123454 ssh_runner.go:195] Run: cat /etc/os-release
	I0316 00:17:36.799121  123454 info.go:137] Remote host: Buildroot 2023.02.9
	I0316 00:17:36.799151  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/addons for local assets ...
	I0316 00:17:36.799222  123454 filesync.go:126] Scanning /home/jenkins/minikube-integration/17991-75602/.minikube/files for local assets ...
	I0316 00:17:36.799298  123454 filesync.go:149] local asset: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem -> 828702.pem in /etc/ssl/certs
	I0316 00:17:36.799423  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0316 00:17:36.808805  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:36.834244  123454 start.go:296] duration metric: took 130.803052ms for postStartSetup
	I0316 00:17:36.834290  123454 fix.go:56] duration metric: took 23.629390369s for fixHost
	I0316 00:17:36.834318  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.837197  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837643  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.837684  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.837926  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.838155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838360  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.838533  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.838721  123454 main.go:141] libmachine: Using SSH client type: native
	I0316 00:17:36.838965  123454 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.137 22 <nil> <nil>}
	I0316 00:17:36.838982  123454 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0316 00:17:36.956309  123454 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710548256.900043121
	
	I0316 00:17:36.956352  123454 fix.go:216] guest clock: 1710548256.900043121
	I0316 00:17:36.956366  123454 fix.go:229] Guest: 2024-03-16 00:17:36.900043121 +0000 UTC Remote: 2024-03-16 00:17:36.83429667 +0000 UTC m=+356.318603082 (delta=65.746451ms)
	I0316 00:17:36.956398  123454 fix.go:200] guest clock delta is within tolerance: 65.746451ms
	I0316 00:17:36.956425  123454 start.go:83] releasing machines lock for "no-preload-238598", held for 23.751563248s
	I0316 00:17:36.956472  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.956736  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:36.960077  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960494  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.960524  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.960678  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961247  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961454  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:17:36.961522  123454 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0316 00:17:36.961588  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.961730  123454 ssh_runner.go:195] Run: cat /version.json
	I0316 00:17:36.961756  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:17:36.964457  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964801  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.964834  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.964905  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965155  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965346  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965374  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:36.965406  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:36.965518  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.965609  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:17:36.965681  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:36.965739  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:17:36.965866  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:17:36.966034  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:17:37.077559  123454 ssh_runner.go:195] Run: systemctl --version
	I0316 00:17:37.084485  123454 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0316 00:17:37.229503  123454 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0316 00:17:37.236783  123454 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0316 00:17:37.236862  123454 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0316 00:17:37.255248  123454 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0316 00:17:37.255275  123454 start.go:494] detecting cgroup driver to use...
	I0316 00:17:37.255377  123454 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0316 00:17:37.272795  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0316 00:17:37.289822  123454 docker.go:217] disabling cri-docker service (if available) ...
	I0316 00:17:37.289885  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0316 00:17:37.306082  123454 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0316 00:17:37.322766  123454 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0316 00:17:37.448135  123454 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0316 00:17:37.614316  123454 docker.go:233] disabling docker service ...
	I0316 00:17:37.614381  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0316 00:17:37.630091  123454 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0316 00:17:37.645025  123454 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0316 00:17:37.773009  123454 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0316 00:17:37.891459  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0316 00:17:37.906829  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0316 00:17:37.927910  123454 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0316 00:17:37.927982  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.939166  123454 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0316 00:17:37.939226  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.950487  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.961547  123454 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0316 00:17:37.972402  123454 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0316 00:17:37.983413  123454 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0316 00:17:37.993080  123454 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0316 00:17:37.993147  123454 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0316 00:17:38.007746  123454 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0316 00:17:38.017917  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:38.158718  123454 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0316 00:17:38.329423  123454 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0316 00:17:38.329520  123454 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0316 00:17:38.334518  123454 start.go:562] Will wait 60s for crictl version
	I0316 00:17:38.334570  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.338570  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0316 00:17:38.375688  123454 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0316 00:17:38.375779  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.408167  123454 ssh_runner.go:195] Run: crio --version
	I0316 00:17:38.444754  123454 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0316 00:17:35.221746  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:35.721487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.221146  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.721411  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.222212  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:37.721889  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.221474  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:38.721198  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.221209  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:39.721227  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:36.277480  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.281375  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:38.446078  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetIP
	I0316 00:17:38.448885  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449299  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:17:38.449329  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:17:38.449565  123454 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0316 00:17:38.453922  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:38.467515  123454 kubeadm.go:877] updating cluster {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0316 00:17:38.467646  123454 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0316 00:17:38.467690  123454 ssh_runner.go:195] Run: sudo crictl images --output json
	I0316 00:17:38.511057  123454 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0316 00:17:38.511093  123454 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0316 00:17:38.511189  123454 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.511221  123454 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0316 00:17:38.511240  123454 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.511253  123454 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.511305  123454 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.511335  123454 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.511338  123454 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.511188  123454 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.512934  123454 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.512949  123454 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.512953  123454 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.512996  123454 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0316 00:17:38.513014  123454 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.512940  123454 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.648129  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.650306  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.661334  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0316 00:17:38.666656  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.669280  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.684494  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.690813  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.760339  123454 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0316 00:17:38.760396  123454 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.760449  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.760545  123454 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0316 00:17:38.760585  123454 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.760641  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908463  123454 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0316 00:17:38.908491  123454 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0316 00:17:38.908515  123454 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:38.908525  123454 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908579  123454 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0316 00:17:38.908607  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0316 00:17:38.908615  123454 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.908585  123454 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0316 00:17:38.908562  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908638  123454 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.908739  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:38.908651  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0316 00:17:38.954587  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0316 00:17:38.954611  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.954699  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:38.961857  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0316 00:17:38.961878  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0316 00:17:38.961979  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:38.962005  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0316 00:17:38.962010  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0316 00:17:39.052859  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.052888  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0316 00:17:39.052907  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.052958  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.052976  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:39.053001  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0316 00:17:39.052963  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0316 00:17:39.053055  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:39.053060  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0316 00:17:39.053100  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:39.053156  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.053235  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:39.120914  123454 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:38.612614  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:40.221375  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.721527  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.221274  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:41.722024  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.221988  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:42.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.221159  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:43.721738  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.221842  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:44.721811  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:40.779012  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:43.278631  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:41.133735  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0: (2.080597621s)
	I0316 00:17:41.133778  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0316 00:17:41.133890  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (2.080807025s)
	I0316 00:17:41.133924  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0316 00:17:41.133942  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.08085981s)
	I0316 00:17:41.133972  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133978  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.080988823s)
	I0316 00:17:41.133993  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0316 00:17:41.133948  123454 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134011  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.080758975s)
	I0316 00:17:41.134031  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0316 00:17:41.134032  123454 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.01309054s)
	I0316 00:17:41.134060  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0316 00:17:41.134083  123454 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0316 00:17:41.134110  123454 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:17:41.134160  123454 ssh_runner.go:195] Run: which crictl
	I0316 00:17:43.198894  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.064808781s)
	I0316 00:17:43.198926  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0316 00:17:43.198952  123454 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.198951  123454 ssh_runner.go:235] Completed: which crictl: (2.064761171s)
	I0316 00:17:43.199004  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0316 00:17:43.199051  123454 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
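	The block above is cache_images.go shelling into the guest: stat to skip already-transferred tarballs, "sudo podman load -i <tar>" to import each cached image, and "sudo crictl rmi <ref>" when the runtime holds an image under the wrong hash. Below is a minimal Go sketch of that shell-out pattern; it assumes podman and crictl are available locally, whereas minikube runs the same commands over SSH inside the VM.

    // loadcache.go — hedged sketch of the "load cached image" step above.
    // Assumes podman and crictl are on PATH locally; not minikube's actual code.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // loadImage mirrors: sudo podman load -i /var/lib/minikube/images/<tar>
    func loadImage(tarPath string) error {
        out, err := exec.Command("sudo", "podman", "load", "-i", tarPath).CombinedOutput()
        if err != nil {
            return fmt.Errorf("podman load %s: %v\n%s", tarPath, err, out)
        }
        return nil
    }

    // removeStaleImage mirrors: sudo crictl rmi <image> when the hash mismatches
    func removeStaleImage(ref string) error {
        out, err := exec.Command("sudo", "crictl", "rmi", ref).CombinedOutput()
        if err != nil {
            return fmt.Errorf("crictl rmi %s: %v\n%s", ref, err, out)
        }
        return nil
    }

    func main() {
        _ = loadImage("/var/lib/minikube/images/coredns_v1.11.1")
        _ = removeStaleImage("gcr.io/k8s-minikube/storage-provisioner:v5")
    }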
	I0316 00:17:43.112939  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.114446  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.613592  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:45.221886  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.721823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.221823  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:46.721181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.221232  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:47.721596  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.221379  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:48.721655  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.221981  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:49.722089  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:45.776235  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.777686  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.278307  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:47.110501  123454 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.911421102s)
	I0316 00:17:47.110567  123454 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0316 00:17:47.110695  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (3.911660704s)
	I0316 00:17:47.110728  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0316 00:17:47.110751  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:47.110703  123454 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:47.110802  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0316 00:17:49.585079  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.474253503s)
	I0316 00:17:49.585109  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0316 00:17:49.585130  123454 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (2.474308112s)
	I0316 00:17:49.585160  123454 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0316 00:17:49.585134  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.585220  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0316 00:17:49.613704  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.615227  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:50.222090  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:50.721817  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:51.722102  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.221885  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.222166  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:53.721394  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.221623  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:54.722016  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:52.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:54.780467  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:51.736360  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.151102687s)
	I0316 00:17:51.736402  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0316 00:17:51.736463  123454 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:51.736535  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0316 00:17:54.214591  123454 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477993231s)
	I0316 00:17:54.214629  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0316 00:17:54.214658  123454 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:54.214728  123454 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0316 00:17:55.171123  123454 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17991-75602/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0316 00:17:55.171204  123454 cache_images.go:123] Successfully loaded all cached images
	I0316 00:17:55.171213  123454 cache_images.go:92] duration metric: took 16.660103091s to LoadCachedImages
	I0316 00:17:55.171233  123454 kubeadm.go:928] updating node { 192.168.50.137 8443 v1.29.0-rc.2 crio true true} ...
	I0316 00:17:55.171506  123454 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-238598 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.137
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0316 00:17:55.171617  123454 ssh_runner.go:195] Run: crio config
	I0316 00:17:55.225056  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:17:55.225078  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:17:55.225089  123454 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0316 00:17:55.225110  123454 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.137 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-238598 NodeName:no-preload-238598 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.137"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.137 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0316 00:17:55.225278  123454 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.137
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-238598"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.137
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.137"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0316 00:17:55.225371  123454 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0316 00:17:55.237834  123454 binaries.go:44] Found k8s binaries, skipping transfer
	I0316 00:17:55.237896  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0316 00:17:55.248733  123454 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0316 00:17:55.266587  123454 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0316 00:17:55.285283  123454 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
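	The rendered kubeadm/kubelet/kube-proxy config shown above is copied to /var/tmp/minikube/kubeadm.yaml.new and, during the restart path further down in this log, replayed phase-by-phase (certs, kubeconfig, kubelet-start, control-plane, etcd local) rather than with a single "kubeadm init". A hedged Go sketch of that phase sequence, assuming kubeadm is on PATH and the config file is already in place:

    // phases.go — sketch of the kubeadm init phases replayed later in this log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // runPhases replays the individual init phases against a pre-rendered config.
    func runPhases(configPath string) error {
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, phase := range phases {
            args := append(phase, "--config", configPath)
            out, err := exec.Command("sudo", append([]string{"kubeadm"}, args...)...).CombinedOutput()
            if err != nil {
                return fmt.Errorf("kubeadm %v: %v\n%s", phase, err, out)
            }
        }
        return nil
    }

    func main() {
        // Path taken from this log; minikube writes it inside the guest VM.
        if err := runPhases("/var/tmp/minikube/kubeadm.yaml"); err != nil {
            fmt.Println(err)
        }
    }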
	I0316 00:17:55.303384  123454 ssh_runner.go:195] Run: grep 192.168.50.137	control-plane.minikube.internal$ /etc/hosts
	I0316 00:17:55.307384  123454 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.137	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0316 00:17:55.321079  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:17:55.453112  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:17:55.470573  123454 certs.go:68] Setting up /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598 for IP: 192.168.50.137
	I0316 00:17:55.470600  123454 certs.go:194] generating shared ca certs ...
	I0316 00:17:55.470623  123454 certs.go:226] acquiring lock for ca certs: {Name:mkcca74edf7bce7ac702ff9d2c53a73917773a33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:17:55.470808  123454 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key
	I0316 00:17:55.470868  123454 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key
	I0316 00:17:55.470906  123454 certs.go:256] generating profile certs ...
	I0316 00:17:55.471028  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.key
	I0316 00:17:55.471140  123454 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key.0f2ae39d
	I0316 00:17:55.471195  123454 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key
	I0316 00:17:55.471410  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem (1338 bytes)
	W0316 00:17:55.471463  123454 certs.go:480] ignoring /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870_empty.pem, impossibly tiny 0 bytes
	I0316 00:17:55.471483  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca-key.pem (1675 bytes)
	I0316 00:17:55.471515  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/ca.pem (1082 bytes)
	I0316 00:17:55.471542  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/cert.pem (1123 bytes)
	I0316 00:17:55.471568  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/certs/key.pem (1675 bytes)
	I0316 00:17:55.471612  123454 certs.go:484] found cert: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem (1708 bytes)
	I0316 00:17:55.472267  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0316 00:17:55.517524  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0316 00:17:54.115678  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:56.613196  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.221179  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:55.721169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.221887  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:56.721323  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.221863  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.721137  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.221258  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.721277  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.221937  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.721213  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:57.277553  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:59.277770  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:17:55.567992  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0316 00:17:55.601463  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0316 00:17:55.637956  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0316 00:17:55.670063  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0316 00:17:55.694990  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0316 00:17:55.718916  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0316 00:17:55.744124  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0316 00:17:55.770051  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/certs/82870.pem --> /usr/share/ca-certificates/82870.pem (1338 bytes)
	I0316 00:17:55.794846  123454 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/ssl/certs/828702.pem --> /usr/share/ca-certificates/828702.pem (1708 bytes)
	I0316 00:17:55.819060  123454 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0316 00:17:55.836991  123454 ssh_runner.go:195] Run: openssl version
	I0316 00:17:55.844665  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0316 00:17:55.857643  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862493  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 15 22:57 /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.862561  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0316 00:17:55.868430  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0316 00:17:55.880551  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/82870.pem && ln -fs /usr/share/ca-certificates/82870.pem /etc/ssl/certs/82870.pem"
	I0316 00:17:55.891953  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896627  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 15 23:06 /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.896687  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/82870.pem
	I0316 00:17:55.902539  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/82870.pem /etc/ssl/certs/51391683.0"
	I0316 00:17:55.915215  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/828702.pem && ln -fs /usr/share/ca-certificates/828702.pem /etc/ssl/certs/828702.pem"
	I0316 00:17:55.926699  123454 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931120  123454 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 15 23:06 /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.931172  123454 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/828702.pem
	I0316 00:17:55.936791  123454 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/828702.pem /etc/ssl/certs/3ec20f2e.0"
	I0316 00:17:55.948180  123454 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0316 00:17:55.953021  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0316 00:17:55.959107  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0316 00:17:55.965018  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0316 00:17:55.971159  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0316 00:17:55.977069  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0316 00:17:55.983062  123454 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0316 00:17:55.989119  123454 kubeadm.go:391] StartCluster: {Name:no-preload-238598 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-238598 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0316 00:17:55.989201  123454 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0316 00:17:55.989254  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.029128  123454 cri.go:89] found id: ""
	I0316 00:17:56.029209  123454 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0316 00:17:56.040502  123454 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0316 00:17:56.040525  123454 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0316 00:17:56.040531  123454 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0316 00:17:56.040577  123454 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0316 00:17:56.051843  123454 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0316 00:17:56.052995  123454 kubeconfig.go:125] found "no-preload-238598" server: "https://192.168.50.137:8443"
	I0316 00:17:56.055273  123454 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0316 00:17:56.066493  123454 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.137
	I0316 00:17:56.066547  123454 kubeadm.go:1154] stopping kube-system containers ...
	I0316 00:17:56.066564  123454 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0316 00:17:56.066641  123454 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0316 00:17:56.111015  123454 cri.go:89] found id: ""
	I0316 00:17:56.111110  123454 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0316 00:17:56.131392  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:17:56.142638  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:17:56.142665  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:17:56.142725  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:17:56.154318  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:17:56.154418  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:17:56.166011  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:17:56.176688  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:17:56.176752  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:17:56.187776  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.198216  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:17:56.198285  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:17:56.208661  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:17:56.218587  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:17:56.218655  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:17:56.230247  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:17:56.241302  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:56.361423  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.731067  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.369591288s)
	I0316 00:17:57.731101  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:57.952457  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.044540  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:17:58.179796  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:17:58.179894  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:58.680635  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.180617  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:17:59.205383  123454 api_server.go:72] duration metric: took 1.025590775s to wait for apiserver process to appear ...
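	api_server.go first waits for the kube-apiserver process itself, re-running "sudo pgrep -xnf kube-apiserver.*minikube.*" every 500ms until it exits 0 (the long runs of identical Run lines throughout this log). A minimal sketch of that poll loop, assuming a local pgrep rather than minikube's ssh_runner:

    // procwait.go — sketch of the 500ms apiserver-process wait shown above.
    package main

    import (
        "errors"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep until a kube-apiserver process that
    // mentions "minikube" appears, or the timeout expires.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // Equivalent of: sudo pgrep -xnf kube-apiserver.*minikube.*
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil // process found
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for kube-apiserver process")
    }

    func main() {
        if err := waitForAPIServerProcess(4 * time.Minute); err != nil {
            fmt.Println(err)
        }
    }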
	I0316 00:17:59.205411  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:17:59.205436  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:59.205935  123454 api_server.go:269] stopped: https://192.168.50.137:8443/healthz: Get "https://192.168.50.137:8443/healthz": dial tcp 192.168.50.137:8443: connect: connection refused
	I0316 00:17:59.706543  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:17:58.613340  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:00.618869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:01.914835  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.914865  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:01.914879  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:01.972138  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0316 00:18:01.972173  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0316 00:18:02.206540  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.219111  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.219165  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:02.705639  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:02.709820  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0316 00:18:02.709850  123454 api_server.go:103] status: https://192.168.50.137:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0316 00:18:03.206513  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:18:03.216320  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:18:03.224237  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:18:03.224263  123454 api_server.go:131] duration metric: took 4.018845389s to wait for apiserver health ...
	I0316 00:18:03.224272  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:18:03.224279  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:18:03.225951  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
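	Once the process exists, the healthz wait above polls https://<node-ip>:8443/healthz, treating connection refused, 403 (anonymous user before RBAC bootstrap) and 500 (poststarthooks still settling) as not-ready and stopping at 200/ok. A rough sketch of that loop follows; the InsecureSkipVerify shortcut is an assumption made for brevity, the real check authenticates with the cluster CA.

    // healthz.go — hedged sketch of the apiserver healthz polling above.
    package main

    import (
        "crypto/tls"
        "errors"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls /healthz until it returns HTTP 200; 403 and 500
    // responses are logged and retried, just like the report above shows.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Assumption for this sketch only: skip cert verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return errors.New("timed out waiting for apiserver healthz")
    }

    func main() {
        _ = waitForHealthz("https://192.168.50.137:8443/healthz", 4*time.Minute)
    }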
	I0316 00:18:00.221426  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:00.721865  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.222060  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.721522  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.221416  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:02.721512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.222086  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:03.721652  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.221178  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:04.721726  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:01.777309  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.777625  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:03.227382  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:18:03.245892  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0316 00:18:03.267423  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:18:03.281349  123454 system_pods.go:59] 8 kube-system pods found
	I0316 00:18:03.281387  123454 system_pods.go:61] "coredns-76f75df574-d2f6z" [3cd22981-0f83-4a60-9930-c103cfc2d2ee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:18:03.281397  123454 system_pods.go:61] "etcd-no-preload-238598" [d98fa5b6-ad24-4c90-98c8-9e5b8f1a3250] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0316 00:18:03.281408  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [e7d7a5a0-9a4f-4df2-aaf7-44c36e5bd313] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0316 00:18:03.281420  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [a198865e-0ed5-40b6-8b10-a4fccdefa059] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0316 00:18:03.281434  123454 system_pods.go:61] "kube-proxy-cjhzn" [6529873c-cb9d-42d8-991d-e450783b1707] Running
	I0316 00:18:03.281443  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [bfb373fb-ec78-4ef1-b92e-3a8af3f805a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0316 00:18:03.281457  123454 system_pods.go:61] "metrics-server-57f55c9bc5-hffvp" [4181fe7f-3e95-455b-a744-8f4dca7b870d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:18:03.281466  123454 system_pods.go:61] "storage-provisioner" [d568ae10-7b9c-4c98-8263-a09505227ac7] Running
	I0316 00:18:03.281485  123454 system_pods.go:74] duration metric: took 14.043103ms to wait for pod list to return data ...
	I0316 00:18:03.281501  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:18:03.284899  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:18:03.284923  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:18:03.284934  123454 node_conditions.go:105] duration metric: took 3.425812ms to run NodePressure ...
	I0316 00:18:03.284955  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0316 00:18:03.562930  123454 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568376  123454 kubeadm.go:733] kubelet initialised
	I0316 00:18:03.568402  123454 kubeadm.go:734] duration metric: took 5.44437ms waiting for restarted kubelet to initialise ...
	I0316 00:18:03.568412  123454 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:18:03.574420  123454 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
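	pod_ready.go then waits up to 4m0s per system-critical pod, re-reading each pod until its Ready condition flips to True (the pod_ready.go:102 lines are the not-yet-ready polls; pod_ready.go:92 marks success). A hedged client-go sketch of that check; the kubeconfig path is hypothetical and the pod name is copied from this log purely for illustration:

    // podready.go — sketch of a Ready-condition wait, not minikube's actual code.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Hypothetical kubeconfig path; minikube uses the profile's own context.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-d2f6z", metav1.GetOptions{})
            if err == nil && isPodReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }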
	I0316 00:18:03.113622  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.613724  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:07.614087  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.221553  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:05.721901  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.221156  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.721183  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.221422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:07.721748  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.222065  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:08.721708  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.221870  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:09.721200  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:06.278238  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.776236  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:05.582284  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:08.081679  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.082343  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.113282  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.114515  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:10.221957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.721202  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.221285  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:11.721255  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.222074  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:12.721513  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.221642  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:13.721701  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.221605  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:14.721818  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:10.776835  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.777258  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.778115  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:12.582099  123454 pod_ready.go:102] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:13.082243  123454 pod_ready.go:92] pod "coredns-76f75df574-d2f6z" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:13.082263  123454 pod_ready.go:81] duration metric: took 9.507817974s for pod "coredns-76f75df574-d2f6z" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:13.082271  123454 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:15.088733  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:14.613599  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:16.614876  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:15.221195  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:15.721898  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.221269  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:16.722141  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.221185  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.722064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.221430  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:18.721591  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.222026  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:19.721210  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:17.280289  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.777434  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:17.089800  123454 pod_ready.go:102] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:19.092413  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.092441  123454 pod_ready.go:81] duration metric: took 6.010161958s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.092453  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.097972  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.097996  123454 pod_ready.go:81] duration metric: took 5.533097ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.098008  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102186  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.102204  123454 pod_ready.go:81] duration metric: took 4.187939ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.102213  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106692  123454 pod_ready.go:92] pod "kube-proxy-cjhzn" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.106712  123454 pod_ready.go:81] duration metric: took 4.492665ms for pod "kube-proxy-cjhzn" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.106720  123454 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111735  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:18:19.111754  123454 pod_ready.go:81] duration metric: took 5.027601ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.111764  123454 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	I0316 00:18:19.113278  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.114061  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:20.221458  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:20.721448  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.221297  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:21.722144  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.221819  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:22.721699  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.222135  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:23.721905  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:23.721996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:23.761810  124077 cri.go:89] found id: ""
	I0316 00:18:23.761844  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.761856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:23.761864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:23.761917  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:23.798178  124077 cri.go:89] found id: ""
	I0316 00:18:23.798208  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.798216  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:23.798222  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:23.798281  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:23.834863  124077 cri.go:89] found id: ""
	I0316 00:18:23.834896  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.834908  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:23.834916  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:23.834998  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:23.873957  124077 cri.go:89] found id: ""
	I0316 00:18:23.874013  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.874025  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:23.874047  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:23.874134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:23.911121  124077 cri.go:89] found id: ""
	I0316 00:18:23.911149  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.911161  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:23.911168  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:23.911232  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:23.948218  124077 cri.go:89] found id: ""
	I0316 00:18:23.948249  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.948261  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:23.948269  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:23.948336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:23.988020  124077 cri.go:89] found id: ""
	I0316 00:18:23.988052  124077 logs.go:276] 0 containers: []
	W0316 00:18:23.988063  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:23.988070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:23.988144  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:24.023779  124077 cri.go:89] found id: ""
	I0316 00:18:24.023810  124077 logs.go:276] 0 containers: []
	W0316 00:18:24.023818  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:24.023827  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:24.023840  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:24.062760  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:24.062789  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:24.118903  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:24.118949  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:24.134357  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:24.134394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:24.255823  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:24.255880  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:24.255902  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:22.276633  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:24.278807  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:21.119790  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.618664  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:23.115414  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.613572  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:26.823428  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:26.838801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:26.838889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:26.876263  124077 cri.go:89] found id: ""
	I0316 00:18:26.876311  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.876331  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:26.876339  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:26.876403  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:26.912696  124077 cri.go:89] found id: ""
	I0316 00:18:26.912727  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.912738  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:26.912745  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:26.912806  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:26.948621  124077 cri.go:89] found id: ""
	I0316 00:18:26.948651  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.948658  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:26.948668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:26.948756  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:26.989173  124077 cri.go:89] found id: ""
	I0316 00:18:26.989203  124077 logs.go:276] 0 containers: []
	W0316 00:18:26.989213  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:26.989221  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:26.989290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:27.029845  124077 cri.go:89] found id: ""
	I0316 00:18:27.029872  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.029880  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:27.029887  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:27.029936  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:27.067519  124077 cri.go:89] found id: ""
	I0316 00:18:27.067546  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.067554  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:27.067560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:27.067613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:27.111499  124077 cri.go:89] found id: ""
	I0316 00:18:27.111532  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.111544  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:27.111553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:27.111619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:27.151733  124077 cri.go:89] found id: ""
	I0316 00:18:27.151762  124077 logs.go:276] 0 containers: []
	W0316 00:18:27.151771  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:27.151801  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:27.151818  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:27.165408  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:27.165437  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:27.244287  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:27.244318  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:27.244332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:27.315091  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:27.315131  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:27.354148  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:27.354181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:29.910487  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:29.923866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:29.923990  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:29.963028  124077 cri.go:89] found id: ""
	I0316 00:18:29.963059  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.963070  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:29.963078  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:29.963142  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:29.998168  124077 cri.go:89] found id: ""
	I0316 00:18:29.998198  124077 logs.go:276] 0 containers: []
	W0316 00:18:29.998207  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:29.998213  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:29.998263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:30.034678  124077 cri.go:89] found id: ""
	I0316 00:18:30.034719  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.034728  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:30.034734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:30.034784  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:30.075262  124077 cri.go:89] found id: ""
	I0316 00:18:30.075297  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.075309  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:30.075330  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:30.075398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:30.113390  124077 cri.go:89] found id: ""
	I0316 00:18:30.113418  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.113427  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:30.113434  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:30.113512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:30.154381  124077 cri.go:89] found id: ""
	I0316 00:18:30.154413  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.154421  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:30.154427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:30.154490  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:26.778891  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:29.277585  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:25.619282  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.118484  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.121236  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:28.114043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.119153  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.614043  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:30.194921  124077 cri.go:89] found id: ""
	I0316 00:18:30.194956  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.194965  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:30.194970  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:30.195021  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:30.229440  124077 cri.go:89] found id: ""
	I0316 00:18:30.229485  124077 logs.go:276] 0 containers: []
	W0316 00:18:30.229506  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:30.229519  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:30.229547  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:30.283137  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:30.283168  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:30.298082  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:30.298113  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:30.372590  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:30.372613  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:30.372633  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:30.450941  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:30.450981  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:32.995307  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:33.009713  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:33.009781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:33.051599  124077 cri.go:89] found id: ""
	I0316 00:18:33.051648  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.051660  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:33.051668  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:33.051727  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:33.087967  124077 cri.go:89] found id: ""
	I0316 00:18:33.087997  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.088008  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:33.088016  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:33.088096  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:33.125188  124077 cri.go:89] found id: ""
	I0316 00:18:33.125218  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.125230  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:33.125236  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:33.125304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:33.164764  124077 cri.go:89] found id: ""
	I0316 00:18:33.164799  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.164812  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:33.164821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:33.164904  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:33.209320  124077 cri.go:89] found id: ""
	I0316 00:18:33.209349  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.209360  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:33.209369  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:33.209429  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:33.261130  124077 cri.go:89] found id: ""
	I0316 00:18:33.261163  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.261175  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:33.261183  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:33.261273  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:33.305204  124077 cri.go:89] found id: ""
	I0316 00:18:33.305231  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.305242  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:33.305249  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:33.305336  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:33.357157  124077 cri.go:89] found id: ""
	I0316 00:18:33.357192  124077 logs.go:276] 0 containers: []
	W0316 00:18:33.357205  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:33.357217  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:33.357235  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:33.409230  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:33.409264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:33.425965  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:33.425995  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:33.503343  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:33.503375  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:33.503393  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:33.581856  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:33.581896  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:31.778203  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.276424  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:32.618082  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.619339  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:34.614209  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.113521  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:36.128677  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:36.143801  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:36.143897  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:36.181689  124077 cri.go:89] found id: ""
	I0316 00:18:36.181721  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.181730  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:36.181737  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:36.181787  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:36.225092  124077 cri.go:89] found id: ""
	I0316 00:18:36.225126  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.225137  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:36.225144  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:36.225196  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:36.269362  124077 cri.go:89] found id: ""
	I0316 00:18:36.269393  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.269404  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:36.269412  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:36.269489  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:36.308475  124077 cri.go:89] found id: ""
	I0316 00:18:36.308501  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.308509  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:36.308515  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:36.308583  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:36.347259  124077 cri.go:89] found id: ""
	I0316 00:18:36.347286  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.347295  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:36.347301  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:36.347381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:36.385355  124077 cri.go:89] found id: ""
	I0316 00:18:36.385379  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.385386  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:36.385392  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:36.385442  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:36.422260  124077 cri.go:89] found id: ""
	I0316 00:18:36.422291  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.422302  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:36.422310  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:36.422362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:36.465206  124077 cri.go:89] found id: ""
	I0316 00:18:36.465235  124077 logs.go:276] 0 containers: []
	W0316 00:18:36.465246  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:36.465258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:36.465275  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:36.538479  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:36.538501  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:36.538516  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:36.628742  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:36.628805  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:36.670030  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:36.670066  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:36.722237  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:36.722270  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:39.238651  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:39.260882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:39.260967  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:39.310896  124077 cri.go:89] found id: ""
	I0316 00:18:39.310935  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.310949  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:39.310960  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:39.311034  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:39.349172  124077 cri.go:89] found id: ""
	I0316 00:18:39.349199  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.349208  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:39.349214  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:39.349276  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:39.393202  124077 cri.go:89] found id: ""
	I0316 00:18:39.393237  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.393247  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:39.393255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:39.393324  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:39.432124  124077 cri.go:89] found id: ""
	I0316 00:18:39.432158  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.432170  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:39.432179  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:39.432270  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:39.469454  124077 cri.go:89] found id: ""
	I0316 00:18:39.469486  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.469498  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:39.469506  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:39.469571  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:39.510039  124077 cri.go:89] found id: ""
	I0316 00:18:39.510068  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.510076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:39.510082  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:39.510151  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:39.546508  124077 cri.go:89] found id: ""
	I0316 00:18:39.546540  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.546548  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:39.546554  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:39.546608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:39.581806  124077 cri.go:89] found id: ""
	I0316 00:18:39.581838  124077 logs.go:276] 0 containers: []
	W0316 00:18:39.581848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:39.581860  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:39.581880  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:39.652957  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:39.652986  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:39.653005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:39.730622  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:39.730665  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:39.772776  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:39.772813  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:39.827314  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:39.827361  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:36.279218  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:38.779161  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:37.118552  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.619543  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:39.614042  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.113784  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.342174  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:42.356877  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:42.356971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:42.407211  124077 cri.go:89] found id: ""
	I0316 00:18:42.407241  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.407251  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:42.407258  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:42.407340  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:42.444315  124077 cri.go:89] found id: ""
	I0316 00:18:42.444348  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.444359  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:42.444366  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:42.444433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:42.485323  124077 cri.go:89] found id: ""
	I0316 00:18:42.485359  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.485370  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:42.485382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:42.485436  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:42.521898  124077 cri.go:89] found id: ""
	I0316 00:18:42.521937  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.521949  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:42.521960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:42.522026  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:42.558676  124077 cri.go:89] found id: ""
	I0316 00:18:42.558703  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.558711  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:42.558717  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:42.558766  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:42.594416  124077 cri.go:89] found id: ""
	I0316 00:18:42.594444  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.594452  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:42.594457  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:42.594519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:42.636553  124077 cri.go:89] found id: ""
	I0316 00:18:42.636579  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.636587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:42.636593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:42.636645  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:42.670321  124077 cri.go:89] found id: ""
	I0316 00:18:42.670356  124077 logs.go:276] 0 containers: []
	W0316 00:18:42.670370  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:42.670388  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:42.670407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:42.726706  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:42.726744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:42.742029  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:42.742065  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:42.817724  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:42.817748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:42.817763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:42.892710  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:42.892744  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:41.278664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:43.777450  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:42.119118  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.119473  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:44.614102  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:47.112496  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:45.436101  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:45.451036  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:45.451103  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:45.488465  124077 cri.go:89] found id: ""
	I0316 00:18:45.488517  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.488527  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:45.488533  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:45.488585  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:45.525070  124077 cri.go:89] found id: ""
	I0316 00:18:45.525098  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.525106  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:45.525111  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:45.525169  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:45.562478  124077 cri.go:89] found id: ""
	I0316 00:18:45.562510  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.562520  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:45.562526  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:45.562579  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:45.599297  124077 cri.go:89] found id: ""
	I0316 00:18:45.599332  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.599341  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:45.599348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:45.599407  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:45.637880  124077 cri.go:89] found id: ""
	I0316 00:18:45.637910  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.637920  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:45.637928  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:45.637988  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:45.676778  124077 cri.go:89] found id: ""
	I0316 00:18:45.676808  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.676815  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:45.676821  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:45.676875  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:45.718134  124077 cri.go:89] found id: ""
	I0316 00:18:45.718160  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.718171  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:45.718178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:45.718250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:45.758613  124077 cri.go:89] found id: ""
	I0316 00:18:45.758640  124077 logs.go:276] 0 containers: []
	W0316 00:18:45.758648  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:45.758658  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:45.758672  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:45.773682  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:45.773715  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:45.850751  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:45.850772  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:45.850786  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:45.934436  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:45.934487  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:45.975224  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:45.975269  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:48.528894  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:48.543615  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:48.543678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:48.581613  124077 cri.go:89] found id: ""
	I0316 00:18:48.581650  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.581663  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:48.581671  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:48.581746  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:48.617109  124077 cri.go:89] found id: ""
	I0316 00:18:48.617133  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.617143  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:48.617150  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:48.617210  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:48.654527  124077 cri.go:89] found id: ""
	I0316 00:18:48.654557  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.654568  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:48.654576  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:48.654641  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:48.695703  124077 cri.go:89] found id: ""
	I0316 00:18:48.695735  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.695746  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:48.695758  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:48.695823  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:48.733030  124077 cri.go:89] found id: ""
	I0316 00:18:48.733055  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.733065  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:48.733072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:48.733135  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:48.767645  124077 cri.go:89] found id: ""
	I0316 00:18:48.767671  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.767682  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:48.767690  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:48.767751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:48.803889  124077 cri.go:89] found id: ""
	I0316 00:18:48.803918  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.803929  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:48.803937  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:48.804013  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:48.839061  124077 cri.go:89] found id: ""
	I0316 00:18:48.839091  124077 logs.go:276] 0 containers: []
	W0316 00:18:48.839102  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:48.839115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:48.839139  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:48.853497  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:48.853528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:48.925156  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:48.925184  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:48.925202  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:49.012245  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:49.012290  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:49.059067  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:49.059097  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:46.277664  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.279095  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:46.619201  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:48.619302  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:49.113616  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.613449  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.614324  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:51.628370  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:51.628433  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:51.663988  124077 cri.go:89] found id: ""
	I0316 00:18:51.664014  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.664022  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:51.664028  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:51.664101  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:51.697651  124077 cri.go:89] found id: ""
	I0316 00:18:51.697730  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.697749  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:51.697761  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:51.697824  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:51.736859  124077 cri.go:89] found id: ""
	I0316 00:18:51.736888  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.736895  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:51.736901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:51.736953  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:51.775724  124077 cri.go:89] found id: ""
	I0316 00:18:51.775750  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.775757  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:51.775775  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:51.775830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:51.814940  124077 cri.go:89] found id: ""
	I0316 00:18:51.814982  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.814997  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:51.815007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:51.815074  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:51.856264  124077 cri.go:89] found id: ""
	I0316 00:18:51.856300  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.856311  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:51.856318  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:51.856383  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:51.893487  124077 cri.go:89] found id: ""
	I0316 00:18:51.893519  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.893530  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:51.893536  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:51.893606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:51.930607  124077 cri.go:89] found id: ""
	I0316 00:18:51.930633  124077 logs.go:276] 0 containers: []
	W0316 00:18:51.930640  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:51.930651  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:51.930669  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:51.982702  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:51.982753  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:51.997636  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:51.997664  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:52.073058  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:52.073084  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:52.073100  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:52.156693  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:52.156734  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:54.698766  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:54.713472  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:54.713545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:54.750966  124077 cri.go:89] found id: ""
	I0316 00:18:54.750996  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.751007  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:54.751015  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:54.751084  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:54.792100  124077 cri.go:89] found id: ""
	I0316 00:18:54.792123  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.792131  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:54.792137  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:54.792188  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:54.828019  124077 cri.go:89] found id: ""
	I0316 00:18:54.828044  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.828054  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:54.828060  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:54.828122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:54.867841  124077 cri.go:89] found id: ""
	I0316 00:18:54.867881  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.867896  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:54.867914  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:54.867980  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:54.907417  124077 cri.go:89] found id: ""
	I0316 00:18:54.907458  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.907469  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:54.907476  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:54.907545  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:54.945330  124077 cri.go:89] found id: ""
	I0316 00:18:54.945363  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.945375  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:54.945382  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:54.945445  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:54.981200  124077 cri.go:89] found id: ""
	I0316 00:18:54.981226  124077 logs.go:276] 0 containers: []
	W0316 00:18:54.981235  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:54.981242  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:54.981302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:55.016595  124077 cri.go:89] found id: ""
	I0316 00:18:55.016628  124077 logs.go:276] 0 containers: []
	W0316 00:18:55.016638  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:55.016651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:55.016668  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:55.056610  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:55.056642  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:55.113339  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:55.113375  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:55.129576  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:55.129622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:18:50.777409  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:52.779497  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.278072  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:51.119041  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:53.121052  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:54.113699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:56.613686  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	W0316 00:18:55.201536  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:55.201561  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:55.201577  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:57.782382  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:18:57.796780  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:18:57.796891  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:18:57.831701  124077 cri.go:89] found id: ""
	I0316 00:18:57.831733  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.831742  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:18:57.831748  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:18:57.831810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:18:57.869251  124077 cri.go:89] found id: ""
	I0316 00:18:57.869284  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.869295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:18:57.869302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:18:57.869367  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:18:57.904159  124077 cri.go:89] found id: ""
	I0316 00:18:57.904197  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.904208  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:18:57.904217  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:18:57.904291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:18:57.949290  124077 cri.go:89] found id: ""
	I0316 00:18:57.949323  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.949334  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:18:57.949343  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:18:57.949411  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:18:57.988004  124077 cri.go:89] found id: ""
	I0316 00:18:57.988033  124077 logs.go:276] 0 containers: []
	W0316 00:18:57.988043  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:18:57.988051  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:18:57.988124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:18:58.027486  124077 cri.go:89] found id: ""
	I0316 00:18:58.027525  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.027543  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:18:58.027552  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:18:58.027623  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:18:58.067051  124077 cri.go:89] found id: ""
	I0316 00:18:58.067078  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.067087  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:18:58.067093  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:18:58.067143  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:18:58.102292  124077 cri.go:89] found id: ""
	I0316 00:18:58.102324  124077 logs.go:276] 0 containers: []
	W0316 00:18:58.102335  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:18:58.102347  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:18:58.102370  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:18:58.167012  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:18:58.167050  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:18:58.182824  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:18:58.182895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:18:58.259760  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:18:58.259789  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:18:58.259809  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:18:58.335533  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:18:58.335574  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:18:57.778370  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.277696  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:55.618835  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.118984  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.119379  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:18:58.614207  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:01.113795  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:00.881601  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:00.895498  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:00.895562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:00.932491  124077 cri.go:89] found id: ""
	I0316 00:19:00.932517  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.932525  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:00.932531  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:00.932586  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:00.970923  124077 cri.go:89] found id: ""
	I0316 00:19:00.970955  124077 logs.go:276] 0 containers: []
	W0316 00:19:00.970966  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:00.970979  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:00.971055  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:01.012349  124077 cri.go:89] found id: ""
	I0316 00:19:01.012379  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.012388  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:01.012394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:01.012465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:01.050624  124077 cri.go:89] found id: ""
	I0316 00:19:01.050653  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.050664  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:01.050670  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:01.050733  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:01.088817  124077 cri.go:89] found id: ""
	I0316 00:19:01.088848  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.088859  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:01.088866  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:01.088985  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:01.127177  124077 cri.go:89] found id: ""
	I0316 00:19:01.127207  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.127217  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:01.127224  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:01.127277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:01.165632  124077 cri.go:89] found id: ""
	I0316 00:19:01.165662  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.165670  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:01.165677  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:01.165737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:01.201689  124077 cri.go:89] found id: ""
	I0316 00:19:01.201715  124077 logs.go:276] 0 containers: []
	W0316 00:19:01.201724  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:01.201735  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:01.201752  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:01.256115  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:01.256150  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:01.270738  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:01.270764  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:01.342129  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:01.342158  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:01.342175  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:01.421881  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:01.421919  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:03.970064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:03.986194  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:03.986277  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:04.026274  124077 cri.go:89] found id: ""
	I0316 00:19:04.026300  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.026308  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:04.026315  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:04.026376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:04.067787  124077 cri.go:89] found id: ""
	I0316 00:19:04.067811  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.067820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:04.067825  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:04.067905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:04.106803  124077 cri.go:89] found id: ""
	I0316 00:19:04.106838  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.106850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:04.106858  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:04.106927  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:04.150095  124077 cri.go:89] found id: ""
	I0316 00:19:04.150122  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.150133  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:04.150142  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:04.150207  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:04.185505  124077 cri.go:89] found id: ""
	I0316 00:19:04.185534  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.185552  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:04.185560  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:04.185622  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:04.224216  124077 cri.go:89] found id: ""
	I0316 00:19:04.224240  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.224249  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:04.224255  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:04.224309  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:04.265084  124077 cri.go:89] found id: ""
	I0316 00:19:04.265110  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.265118  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:04.265123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:04.265173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:04.304260  124077 cri.go:89] found id: ""
	I0316 00:19:04.304291  124077 logs.go:276] 0 containers: []
	W0316 00:19:04.304302  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:04.304313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:04.304329  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:04.318105  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:04.318147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:04.395544  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:04.395569  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:04.395589  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:04.474841  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:04.474879  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:04.516078  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:04.516108  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:02.281155  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.779663  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:02.618637  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:04.619492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:03.613777  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.114458  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:07.073788  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:07.089367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:07.089517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:07.130763  124077 cri.go:89] found id: ""
	I0316 00:19:07.130785  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.130794  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:07.130802  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:07.130865  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:07.167062  124077 cri.go:89] found id: ""
	I0316 00:19:07.167087  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.167095  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:07.167100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:07.167158  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:07.206082  124077 cri.go:89] found id: ""
	I0316 00:19:07.206112  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.206121  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:07.206127  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:07.206184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:07.245240  124077 cri.go:89] found id: ""
	I0316 00:19:07.245268  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.245279  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:07.245287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:07.245355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:07.294555  124077 cri.go:89] found id: ""
	I0316 00:19:07.294584  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.294596  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:07.294604  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:07.294667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:07.344902  124077 cri.go:89] found id: ""
	I0316 00:19:07.344953  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.344964  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:07.344974  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:07.345043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:07.387913  124077 cri.go:89] found id: ""
	I0316 00:19:07.387949  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.387960  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:07.387969  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:07.388038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:07.423542  124077 cri.go:89] found id: ""
	I0316 00:19:07.423579  124077 logs.go:276] 0 containers: []
	W0316 00:19:07.423593  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:07.423607  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:07.423623  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:07.469022  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:07.469057  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:07.520348  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:07.520382  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:07.533536  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:07.533562  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:07.610109  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:07.610130  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:07.610146  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:07.276601  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.277239  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:06.619784  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:09.118699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:08.613361  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.615062  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:10.186616  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:10.201406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:10.201472  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:10.237519  124077 cri.go:89] found id: ""
	I0316 00:19:10.237546  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.237554  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:10.237560  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:10.237630  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:10.274432  124077 cri.go:89] found id: ""
	I0316 00:19:10.274462  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.274471  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:10.274480  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:10.274558  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:10.313321  124077 cri.go:89] found id: ""
	I0316 00:19:10.313356  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.313367  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:10.313376  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:10.313441  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:10.353675  124077 cri.go:89] found id: ""
	I0316 00:19:10.353702  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.353710  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:10.353716  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:10.353781  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:10.390437  124077 cri.go:89] found id: ""
	I0316 00:19:10.390466  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.390474  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:10.390480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:10.390530  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:10.429831  124077 cri.go:89] found id: ""
	I0316 00:19:10.429870  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.429882  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:10.429911  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:10.429984  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:10.472775  124077 cri.go:89] found id: ""
	I0316 00:19:10.472804  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.472812  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:10.472817  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:10.472878  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:10.509229  124077 cri.go:89] found id: ""
	I0316 00:19:10.509265  124077 logs.go:276] 0 containers: []
	W0316 00:19:10.509284  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:10.509298  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:10.509318  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:10.561199  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:10.561233  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:10.576358  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:10.576386  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:10.652784  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:10.652809  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:10.652826  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:10.727382  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:10.727420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.273154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:13.287778  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:13.287853  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:13.330520  124077 cri.go:89] found id: ""
	I0316 00:19:13.330556  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.330567  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:13.330576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:13.330654  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:13.372138  124077 cri.go:89] found id: ""
	I0316 00:19:13.372174  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.372186  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:13.372193  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:13.372255  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:13.408719  124077 cri.go:89] found id: ""
	I0316 00:19:13.408757  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.408768  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:13.408777  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:13.408837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:13.449275  124077 cri.go:89] found id: ""
	I0316 00:19:13.449308  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.449320  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:13.449328  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:13.449389  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:13.490271  124077 cri.go:89] found id: ""
	I0316 00:19:13.490298  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.490306  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:13.490312  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:13.490362  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:13.535199  124077 cri.go:89] found id: ""
	I0316 00:19:13.535227  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.535239  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:13.535247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:13.535304  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:13.581874  124077 cri.go:89] found id: ""
	I0316 00:19:13.581903  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.581914  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:13.581923  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:13.582000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:13.624625  124077 cri.go:89] found id: ""
	I0316 00:19:13.624655  124077 logs.go:276] 0 containers: []
	W0316 00:19:13.624665  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:13.624675  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:13.624687  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:13.639960  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:13.640026  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:13.724084  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:13.724105  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:13.724147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:13.816350  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:13.816390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:13.857990  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:13.858019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:11.277319  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.777280  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:11.119614  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.618997  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:13.113490  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:15.613530  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:17.613578  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.410118  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:16.423569  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:16.423627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:16.461819  124077 cri.go:89] found id: ""
	I0316 00:19:16.461850  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.461860  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:16.461867  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:16.461921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:16.497293  124077 cri.go:89] found id: ""
	I0316 00:19:16.497321  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.497329  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:16.497335  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:16.497398  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:16.533068  124077 cri.go:89] found id: ""
	I0316 00:19:16.533094  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.533102  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:16.533108  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:16.533156  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:16.571999  124077 cri.go:89] found id: ""
	I0316 00:19:16.572040  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.572051  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:16.572059  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:16.572118  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:16.607087  124077 cri.go:89] found id: ""
	I0316 00:19:16.607119  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.607130  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:16.607137  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:16.607202  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:16.645858  124077 cri.go:89] found id: ""
	I0316 00:19:16.645882  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.645890  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:16.645896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:16.645946  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:16.682638  124077 cri.go:89] found id: ""
	I0316 00:19:16.682668  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.682678  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:16.682685  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:16.682748  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:16.721060  124077 cri.go:89] found id: ""
	I0316 00:19:16.721093  124077 logs.go:276] 0 containers: []
	W0316 00:19:16.721103  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:16.721113  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:16.721129  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:16.771425  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:16.771464  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.786600  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:16.786632  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:16.858444  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:16.858476  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:16.858502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:16.934479  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:16.934529  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:19.473574  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:19.492486  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:19.492556  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:19.539676  124077 cri.go:89] found id: ""
	I0316 00:19:19.539705  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.539713  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:19.539719  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:19.539774  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:19.576274  124077 cri.go:89] found id: ""
	I0316 00:19:19.576305  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.576316  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:19.576325  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:19.576379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:19.612765  124077 cri.go:89] found id: ""
	I0316 00:19:19.612795  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.612805  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:19.612813  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:19.612872  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:19.654284  124077 cri.go:89] found id: ""
	I0316 00:19:19.654310  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.654318  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:19.654324  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:19.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:19.691893  124077 cri.go:89] found id: ""
	I0316 00:19:19.691922  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.691929  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:19.691936  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:19.691999  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:19.729684  124077 cri.go:89] found id: ""
	I0316 00:19:19.729712  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.729720  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:19.729727  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:19.729776  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:19.765038  124077 cri.go:89] found id: ""
	I0316 00:19:19.765066  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.765074  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:19.765080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:19.765130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:19.804136  124077 cri.go:89] found id: ""
	I0316 00:19:19.804162  124077 logs.go:276] 0 containers: []
	W0316 00:19:19.804170  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:19.804179  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:19.804193  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:19.880118  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:19.880146  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:19.880163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:19.955906  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:19.955944  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:20.004054  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:20.004095  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:20.058358  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:20.058401  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:16.276204  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.277156  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:16.118717  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:18.618005  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:19.614161  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.112808  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.573495  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:22.587422  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:22.587496  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:22.625573  124077 cri.go:89] found id: ""
	I0316 00:19:22.625596  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.625606  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:22.625624  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:22.625689  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:22.663141  124077 cri.go:89] found id: ""
	I0316 00:19:22.663172  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.663183  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:22.663190  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:22.663257  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:22.701314  124077 cri.go:89] found id: ""
	I0316 00:19:22.701352  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.701371  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:22.701380  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:22.701461  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:22.737900  124077 cri.go:89] found id: ""
	I0316 00:19:22.737956  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.737968  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:22.737978  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:22.738036  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:22.772175  124077 cri.go:89] found id: ""
	I0316 00:19:22.772207  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.772217  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:22.772226  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:22.772287  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:22.808715  124077 cri.go:89] found id: ""
	I0316 00:19:22.808747  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.808758  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:22.808766  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:22.808830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:22.844953  124077 cri.go:89] found id: ""
	I0316 00:19:22.844984  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.844995  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:22.845003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:22.845059  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:22.882483  124077 cri.go:89] found id: ""
	I0316 00:19:22.882519  124077 logs.go:276] 0 containers: []
	W0316 00:19:22.882529  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:22.882560  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:22.882576  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:22.966316  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:22.966359  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:23.012825  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:23.012866  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:23.065242  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:23.065283  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:23.081272  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:23.081306  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:23.159615  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:20.777843  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.778609  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.780571  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:20.618505  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:22.619290  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.118778  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:24.113901  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:26.115541  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:25.660595  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:25.674765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:25.674839  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:25.713488  124077 cri.go:89] found id: ""
	I0316 00:19:25.713520  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.713531  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:25.713540  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:25.713603  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:25.748771  124077 cri.go:89] found id: ""
	I0316 00:19:25.748796  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.748803  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:25.748809  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:25.748855  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:25.790509  124077 cri.go:89] found id: ""
	I0316 00:19:25.790540  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.790550  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:25.790558  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:25.790616  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:25.833655  124077 cri.go:89] found id: ""
	I0316 00:19:25.833684  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.833692  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:25.833698  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:25.833761  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:25.869482  124077 cri.go:89] found id: ""
	I0316 00:19:25.869514  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.869526  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:25.869535  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:25.869595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:25.907263  124077 cri.go:89] found id: ""
	I0316 00:19:25.907308  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.907336  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:25.907364  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:25.907435  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:25.942851  124077 cri.go:89] found id: ""
	I0316 00:19:25.942889  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.942901  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:25.942909  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:25.942975  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:25.981363  124077 cri.go:89] found id: ""
	I0316 00:19:25.981389  124077 logs.go:276] 0 containers: []
	W0316 00:19:25.981396  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:25.981406  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:25.981418  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:26.025766  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:26.025801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:26.082924  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:26.082963  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:26.098131  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:26.098161  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:26.176629  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:26.176652  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:26.176666  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:28.757406  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:28.772737  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:28.772811  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:28.816943  124077 cri.go:89] found id: ""
	I0316 00:19:28.816973  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.816981  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:28.816987  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:28.817039  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:28.853877  124077 cri.go:89] found id: ""
	I0316 00:19:28.853909  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.853919  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:28.853926  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:28.853981  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:28.889440  124077 cri.go:89] found id: ""
	I0316 00:19:28.889467  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.889475  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:28.889480  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:28.889532  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:28.929198  124077 cri.go:89] found id: ""
	I0316 00:19:28.929221  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.929229  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:28.929235  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:28.929296  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:28.968719  124077 cri.go:89] found id: ""
	I0316 00:19:28.968746  124077 logs.go:276] 0 containers: []
	W0316 00:19:28.968754  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:28.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:28.968830  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:29.006750  124077 cri.go:89] found id: ""
	I0316 00:19:29.006781  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.006805  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:29.006822  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:29.006889  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:29.041954  124077 cri.go:89] found id: ""
	I0316 00:19:29.041986  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.041996  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:29.042003  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:29.042069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:29.082798  124077 cri.go:89] found id: ""
	I0316 00:19:29.082836  124077 logs.go:276] 0 containers: []
	W0316 00:19:29.082848  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:29.082861  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:29.082878  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:29.138761  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:29.138801  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:29.152977  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:29.153009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:29.229013  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:29.229042  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:29.229061  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:29.315131  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:29.315170  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:27.277159  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:29.277242  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:27.618996  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:30.118650  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:28.614101  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.114366  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:31.861512  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:31.875286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:31.875374  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:31.912968  124077 cri.go:89] found id: ""
	I0316 00:19:31.912997  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.913034  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:31.913042  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:31.913113  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:31.952603  124077 cri.go:89] found id: ""
	I0316 00:19:31.952633  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.952645  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:31.952653  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:31.952719  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:31.989804  124077 cri.go:89] found id: ""
	I0316 00:19:31.989838  124077 logs.go:276] 0 containers: []
	W0316 00:19:31.989849  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:31.989857  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:31.989921  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:32.033765  124077 cri.go:89] found id: ""
	I0316 00:19:32.033801  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.033809  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:32.033816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:32.033880  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:32.070964  124077 cri.go:89] found id: ""
	I0316 00:19:32.070999  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.071013  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:32.071022  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:32.071095  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:32.108651  124077 cri.go:89] found id: ""
	I0316 00:19:32.108681  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.108691  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:32.108699  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:32.108765  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:32.152021  124077 cri.go:89] found id: ""
	I0316 00:19:32.152047  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.152055  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:32.152061  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:32.152124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:32.189889  124077 cri.go:89] found id: ""
	I0316 00:19:32.189913  124077 logs.go:276] 0 containers: []
	W0316 00:19:32.189921  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:32.189930  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:32.189943  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:32.262182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:32.262207  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:32.262218  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:32.348214  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:32.348264  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:32.392798  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:32.392829  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:32.447451  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:32.447504  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:34.963540  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:34.978764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:34.978846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:35.021630  124077 cri.go:89] found id: ""
	I0316 00:19:35.021665  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.021675  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:35.021681  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:35.021750  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:35.059252  124077 cri.go:89] found id: ""
	I0316 00:19:35.059285  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.059295  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:35.059303  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:35.059380  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:35.099584  124077 cri.go:89] found id: ""
	I0316 00:19:35.099610  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.099619  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:35.099625  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:35.099679  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:35.140566  124077 cri.go:89] found id: ""
	I0316 00:19:35.140600  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.140611  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:35.140618  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:35.140678  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:31.776661  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.778372  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:32.125130  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:34.619153  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:33.114785  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.116692  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:37.613605  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:35.181888  124077 cri.go:89] found id: ""
	I0316 00:19:35.181928  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.181940  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:35.181948  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:35.182018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:35.218158  124077 cri.go:89] found id: ""
	I0316 00:19:35.218183  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.218192  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:35.218198  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:35.218260  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:35.255178  124077 cri.go:89] found id: ""
	I0316 00:19:35.255214  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.255225  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:35.255233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:35.255302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:35.293623  124077 cri.go:89] found id: ""
	I0316 00:19:35.293664  124077 logs.go:276] 0 containers: []
	W0316 00:19:35.293674  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:35.293686  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:35.293702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:35.349175  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:35.349217  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:35.363714  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:35.363750  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:35.436182  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:35.436212  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:35.436231  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:35.513000  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:35.513039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.061103  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:38.075891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:38.075971  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:38.116330  124077 cri.go:89] found id: ""
	I0316 00:19:38.116361  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.116369  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:38.116374  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:38.116431  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:38.152900  124077 cri.go:89] found id: ""
	I0316 00:19:38.152927  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.152936  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:38.152945  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:38.152996  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:38.188765  124077 cri.go:89] found id: ""
	I0316 00:19:38.188803  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.188814  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:38.188823  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:38.188914  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:38.229885  124077 cri.go:89] found id: ""
	I0316 00:19:38.229914  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.229923  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:38.229929  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:38.230009  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:38.271211  124077 cri.go:89] found id: ""
	I0316 00:19:38.271238  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.271249  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:38.271257  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:38.271341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:38.308344  124077 cri.go:89] found id: ""
	I0316 00:19:38.308395  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.308405  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:38.308411  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:38.308491  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:38.346355  124077 cri.go:89] found id: ""
	I0316 00:19:38.346386  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.346398  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:38.346406  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:38.346478  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:38.383743  124077 cri.go:89] found id: ""
	I0316 00:19:38.383779  124077 logs.go:276] 0 containers: []
	W0316 00:19:38.383788  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:38.383798  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:38.383812  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:38.398420  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:38.398449  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:38.472286  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:38.472312  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:38.472332  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:38.554722  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:38.554761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:38.598074  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:38.598107  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:36.276574  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.276784  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:36.619780  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:38.619966  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:39.614178  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.616246  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.152744  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:41.166734  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:41.166819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:41.208070  124077 cri.go:89] found id: ""
	I0316 00:19:41.208102  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.208113  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:41.208122  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:41.208184  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:41.244759  124077 cri.go:89] found id: ""
	I0316 00:19:41.244787  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.244794  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:41.244803  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:41.244856  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:41.280954  124077 cri.go:89] found id: ""
	I0316 00:19:41.280981  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.280989  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:41.280995  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:41.281043  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:41.318041  124077 cri.go:89] found id: ""
	I0316 00:19:41.318074  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.318085  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:41.318098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:41.318163  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:41.356425  124077 cri.go:89] found id: ""
	I0316 00:19:41.356462  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.356473  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:41.356481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:41.356549  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:41.398216  124077 cri.go:89] found id: ""
	I0316 00:19:41.398242  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.398252  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:41.398261  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:41.398320  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:41.433743  124077 cri.go:89] found id: ""
	I0316 00:19:41.433773  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.433781  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:41.433787  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:41.433848  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:41.471907  124077 cri.go:89] found id: ""
	I0316 00:19:41.471963  124077 logs.go:276] 0 containers: []
	W0316 00:19:41.471978  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:41.471991  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:41.472009  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:41.525966  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:41.526005  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:41.541096  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:41.541132  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:41.608553  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:41.608577  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:41.608591  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:41.694620  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:41.694663  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.239169  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:44.252953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:44.253032  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:44.290724  124077 cri.go:89] found id: ""
	I0316 00:19:44.290760  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.290767  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:44.290774  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:44.290826  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:44.327086  124077 cri.go:89] found id: ""
	I0316 00:19:44.327121  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.327130  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:44.327136  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:44.327259  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:44.365264  124077 cri.go:89] found id: ""
	I0316 00:19:44.365292  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.365302  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:44.365309  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:44.365379  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:44.400690  124077 cri.go:89] found id: ""
	I0316 00:19:44.400716  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.400724  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:44.400730  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:44.400793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:44.436895  124077 cri.go:89] found id: ""
	I0316 00:19:44.436926  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.436938  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:44.436953  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:44.437022  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:44.472790  124077 cri.go:89] found id: ""
	I0316 00:19:44.472824  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.472832  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:44.472838  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:44.472901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:44.507399  124077 cri.go:89] found id: ""
	I0316 00:19:44.507428  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.507440  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:44.507454  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:44.507519  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:44.545780  124077 cri.go:89] found id: ""
	I0316 00:19:44.545817  124077 logs.go:276] 0 containers: []
	W0316 00:19:44.545828  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:44.545840  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:44.545858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:44.560424  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:44.560459  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:44.630978  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:44.630998  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:44.631013  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:44.716870  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:44.716908  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:44.756835  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:44.756864  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:40.779366  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.277656  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.279201  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:41.118560  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:43.120706  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:44.113022  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:46.114296  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.312424  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:47.325763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:47.325834  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:47.364426  124077 cri.go:89] found id: ""
	I0316 00:19:47.364460  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.364470  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:47.364476  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:47.364531  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:47.405718  124077 cri.go:89] found id: ""
	I0316 00:19:47.405748  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.405756  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:47.405762  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:47.405812  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:47.441331  124077 cri.go:89] found id: ""
	I0316 00:19:47.441359  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.441366  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:47.441371  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:47.441446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:47.474755  124077 cri.go:89] found id: ""
	I0316 00:19:47.474787  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.474798  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:47.474805  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:47.474867  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:47.511315  124077 cri.go:89] found id: ""
	I0316 00:19:47.511364  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.511376  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:47.511383  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:47.511468  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:47.562974  124077 cri.go:89] found id: ""
	I0316 00:19:47.563006  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.563014  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:47.563020  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:47.563077  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:47.597053  124077 cri.go:89] found id: ""
	I0316 00:19:47.597084  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.597096  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:47.597104  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:47.597174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:47.633712  124077 cri.go:89] found id: ""
	I0316 00:19:47.633744  124077 logs.go:276] 0 containers: []
	W0316 00:19:47.633754  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:47.633764  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:47.633779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:47.648463  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:47.648493  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:47.724363  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:47.724384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:47.724399  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:47.802532  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:47.802564  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:47.844185  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:47.844223  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:47.778494  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.277998  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:45.619070  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:47.622001  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.118739  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:48.114952  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.614794  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:50.396256  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:50.410802  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:50.410871  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:50.445437  124077 cri.go:89] found id: ""
	I0316 00:19:50.445472  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.445491  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:50.445499  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:50.445561  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:50.488098  124077 cri.go:89] found id: ""
	I0316 00:19:50.488134  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.488147  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:50.488154  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:50.488217  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:50.521834  124077 cri.go:89] found id: ""
	I0316 00:19:50.521874  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.521912  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:50.521924  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:50.522008  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:50.556600  124077 cri.go:89] found id: ""
	I0316 00:19:50.556627  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.556636  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:50.556641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:50.556703  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:50.590245  124077 cri.go:89] found id: ""
	I0316 00:19:50.590272  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.590280  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:50.590287  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:50.590347  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:50.629672  124077 cri.go:89] found id: ""
	I0316 00:19:50.629705  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.629717  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:50.629726  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:50.629793  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:50.675908  124077 cri.go:89] found id: ""
	I0316 00:19:50.675940  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.675949  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:50.675955  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:50.676014  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:50.715572  124077 cri.go:89] found id: ""
	I0316 00:19:50.715605  124077 logs.go:276] 0 containers: []
	W0316 00:19:50.715615  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:50.715627  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:50.715654  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:50.769665  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:50.769699  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:50.787735  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:50.787768  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:50.856419  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:50.856450  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:50.856466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:50.940719  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:50.940756  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:53.487005  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:53.500855  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:53.500933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:53.541721  124077 cri.go:89] found id: ""
	I0316 00:19:53.541754  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.541766  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:53.541778  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:53.541847  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:53.579387  124077 cri.go:89] found id: ""
	I0316 00:19:53.579421  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.579431  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:53.579439  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:53.579505  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:53.618230  124077 cri.go:89] found id: ""
	I0316 00:19:53.618258  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.618266  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:53.618272  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:53.618337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:53.657699  124077 cri.go:89] found id: ""
	I0316 00:19:53.657736  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.657747  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:53.657754  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:53.657818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:53.695243  124077 cri.go:89] found id: ""
	I0316 00:19:53.695273  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.695284  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:53.695292  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:53.695365  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:53.733657  124077 cri.go:89] found id: ""
	I0316 00:19:53.733690  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.733702  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:53.733711  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:53.733777  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:53.772230  124077 cri.go:89] found id: ""
	I0316 00:19:53.772259  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.772268  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:53.772276  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:53.772334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:53.809161  124077 cri.go:89] found id: ""
	I0316 00:19:53.809193  124077 logs.go:276] 0 containers: []
	W0316 00:19:53.809202  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:53.809211  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:53.809225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:53.859607  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:53.859647  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:53.874666  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:53.874702  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:53.951810  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:53.951841  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:53.951858  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:54.039391  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:54.039431  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:52.776113  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.777687  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:52.119145  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:54.619675  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:53.113139  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:55.113961  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.613751  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:56.587899  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:56.602407  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:56.602466  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:56.639588  124077 cri.go:89] found id: ""
	I0316 00:19:56.639614  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.639623  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:56.639629  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:56.639687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:56.679017  124077 cri.go:89] found id: ""
	I0316 00:19:56.679046  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.679058  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:56.679066  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:56.679136  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:56.714897  124077 cri.go:89] found id: ""
	I0316 00:19:56.714925  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.714933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:56.714941  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:56.715017  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:56.751313  124077 cri.go:89] found id: ""
	I0316 00:19:56.751349  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.751357  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:56.751363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:56.751413  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:56.786967  124077 cri.go:89] found id: ""
	I0316 00:19:56.786994  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.787001  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:56.787007  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:56.787069  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:56.826233  124077 cri.go:89] found id: ""
	I0316 00:19:56.826266  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.826277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:56.826286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:56.826344  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:56.860840  124077 cri.go:89] found id: ""
	I0316 00:19:56.860881  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.860893  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:56.860901  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:56.860960  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:19:56.901224  124077 cri.go:89] found id: ""
	I0316 00:19:56.901252  124077 logs.go:276] 0 containers: []
	W0316 00:19:56.901263  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:19:56.901275  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:19:56.901293  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:19:56.955002  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:19:56.955039  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:19:56.970583  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:19:56.970619  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:19:57.057799  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:19:57.057822  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:19:57.057838  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.138059  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:19:57.138101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:19:59.680008  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:19:59.700264  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:19:59.700346  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:19:59.756586  124077 cri.go:89] found id: ""
	I0316 00:19:59.756630  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.756644  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:19:59.756656  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:19:59.756731  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:19:59.805955  124077 cri.go:89] found id: ""
	I0316 00:19:59.805985  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.805997  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:19:59.806004  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:19:59.806076  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:19:59.843309  124077 cri.go:89] found id: ""
	I0316 00:19:59.843352  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.843361  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:19:59.843367  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:19:59.843418  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:19:59.879656  124077 cri.go:89] found id: ""
	I0316 00:19:59.879692  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.879705  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:19:59.879715  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:19:59.879788  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:19:59.917609  124077 cri.go:89] found id: ""
	I0316 00:19:59.917642  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.917652  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:19:59.917659  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:19:59.917725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:19:59.953915  124077 cri.go:89] found id: ""
	I0316 00:19:59.953949  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.953959  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:19:59.953968  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:19:59.954029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:19:59.991616  124077 cri.go:89] found id: ""
	I0316 00:19:59.991697  124077 logs.go:276] 0 containers: []
	W0316 00:19:59.991706  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:19:59.991714  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:19:59.991770  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:00.027976  124077 cri.go:89] found id: ""
	I0316 00:20:00.028008  124077 logs.go:276] 0 containers: []
	W0316 00:20:00.028019  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:00.028031  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:00.028051  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:00.103912  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:00.103958  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:00.103985  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:19:57.277412  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.277555  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:57.119685  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.618622  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:19:59.614914  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:02.113286  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:00.190763  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:00.190811  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:00.234428  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:00.234456  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:00.290431  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:00.290461  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:02.805044  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:02.819825  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:02.819902  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:02.853903  124077 cri.go:89] found id: ""
	I0316 00:20:02.853939  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.853948  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:02.853957  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:02.854025  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:02.887540  124077 cri.go:89] found id: ""
	I0316 00:20:02.887566  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.887576  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:02.887584  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:02.887646  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:02.921916  124077 cri.go:89] found id: ""
	I0316 00:20:02.921942  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.921950  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:02.921957  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:02.922018  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:02.957816  124077 cri.go:89] found id: ""
	I0316 00:20:02.957842  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.957850  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:02.957856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:02.957905  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:02.991892  124077 cri.go:89] found id: ""
	I0316 00:20:02.991943  124077 logs.go:276] 0 containers: []
	W0316 00:20:02.991954  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:02.991960  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:02.992020  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:03.030036  124077 cri.go:89] found id: ""
	I0316 00:20:03.030068  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.030078  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:03.030087  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:03.030155  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:03.067841  124077 cri.go:89] found id: ""
	I0316 00:20:03.067869  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.067888  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:03.067896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:03.067963  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:03.107661  124077 cri.go:89] found id: ""
	I0316 00:20:03.107694  124077 logs.go:276] 0 containers: []
	W0316 00:20:03.107706  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:03.107731  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:03.107758  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:03.152546  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:03.152579  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:03.209936  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:03.209974  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:03.223848  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:03.223873  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:03.298017  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:03.298040  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:03.298054  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:01.777542  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.278277  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:01.618756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.119973  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:04.113918  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.613434  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:05.884957  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:05.899052  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:05.899111  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:05.940588  124077 cri.go:89] found id: ""
	I0316 00:20:05.940624  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.940634  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:05.940640  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:05.940709  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:05.976552  124077 cri.go:89] found id: ""
	I0316 00:20:05.976597  124077 logs.go:276] 0 containers: []
	W0316 00:20:05.976612  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:05.976620  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:05.976690  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:06.014831  124077 cri.go:89] found id: ""
	I0316 00:20:06.014857  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.014864  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:06.014870  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:06.014952  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:06.050717  124077 cri.go:89] found id: ""
	I0316 00:20:06.050750  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.050759  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:06.050765  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:06.050819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:06.087585  124077 cri.go:89] found id: ""
	I0316 00:20:06.087618  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.087632  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:06.087640  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:06.087704  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:06.130591  124077 cri.go:89] found id: ""
	I0316 00:20:06.130615  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.130624  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:06.130630  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:06.130682  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:06.172022  124077 cri.go:89] found id: ""
	I0316 00:20:06.172053  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.172062  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:06.172068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:06.172130  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:06.214309  124077 cri.go:89] found id: ""
	I0316 00:20:06.214354  124077 logs.go:276] 0 containers: []
	W0316 00:20:06.214363  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:06.214372  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:06.214385  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:06.272134  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:06.272181  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:06.287080  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:06.287106  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:06.368011  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:06.368030  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:06.368044  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:06.447778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:06.447821  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:08.989311  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:09.003492  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:09.003554  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:09.042206  124077 cri.go:89] found id: ""
	I0316 00:20:09.042233  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.042242  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:09.042248  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:09.042298  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:09.085942  124077 cri.go:89] found id: ""
	I0316 00:20:09.085981  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.085992  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:09.086001  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:09.086072  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:09.128814  124077 cri.go:89] found id: ""
	I0316 00:20:09.128842  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.128850  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:09.128856  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:09.128916  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:09.169829  124077 cri.go:89] found id: ""
	I0316 00:20:09.169857  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.169866  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:09.169874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:09.169932  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:09.210023  124077 cri.go:89] found id: ""
	I0316 00:20:09.210051  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.210058  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:09.210068  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:09.210128  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:09.251308  124077 cri.go:89] found id: ""
	I0316 00:20:09.251356  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.251366  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:09.251372  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:09.251448  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:09.297560  124077 cri.go:89] found id: ""
	I0316 00:20:09.297590  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.297602  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:09.297611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:09.297672  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:09.337521  124077 cri.go:89] found id: ""
	I0316 00:20:09.337550  124077 logs.go:276] 0 containers: []
	W0316 00:20:09.337562  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:09.337574  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:09.337592  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:09.395370  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:09.395407  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:09.409451  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:09.409485  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:09.481301  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:09.481332  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:09.481350  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:09.561575  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:09.561615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:06.278976  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.778022  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:06.124642  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.618968  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:08.613517  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.613699  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.613997  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:12.103679  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:12.120189  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:12.120251  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:12.160911  124077 cri.go:89] found id: ""
	I0316 00:20:12.160945  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.160956  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:12.160964  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:12.161028  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:12.200600  124077 cri.go:89] found id: ""
	I0316 00:20:12.200632  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.200647  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:12.200655  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:12.200722  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:12.237414  124077 cri.go:89] found id: ""
	I0316 00:20:12.237458  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.237470  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:12.237478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:12.237543  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:12.274437  124077 cri.go:89] found id: ""
	I0316 00:20:12.274465  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.274472  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:12.274478  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:12.274541  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:12.319073  124077 cri.go:89] found id: ""
	I0316 00:20:12.319107  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.319115  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:12.319121  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:12.319185  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:12.355018  124077 cri.go:89] found id: ""
	I0316 00:20:12.355052  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.355062  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:12.355070  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:12.355134  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:12.391027  124077 cri.go:89] found id: ""
	I0316 00:20:12.391057  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.391066  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:12.391072  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:12.391124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:12.426697  124077 cri.go:89] found id: ""
	I0316 00:20:12.426729  124077 logs.go:276] 0 containers: []
	W0316 00:20:12.426737  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:12.426747  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:12.426761  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:12.476480  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:12.476520  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:12.491589  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:12.491622  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:12.563255  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:12.563286  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:12.563308  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:12.643219  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:12.643255  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:11.277492  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.777429  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:10.619721  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:13.120185  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.114540  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:17.614281  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.187850  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:15.202360  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:15.202444  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:15.238704  124077 cri.go:89] found id: ""
	I0316 00:20:15.238733  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.238746  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:15.238753  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:15.238819  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:15.277025  124077 cri.go:89] found id: ""
	I0316 00:20:15.277053  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.277063  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:15.277070  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:15.277133  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:15.315264  124077 cri.go:89] found id: ""
	I0316 00:20:15.315297  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.315308  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:15.315315  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:15.315395  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:15.354699  124077 cri.go:89] found id: ""
	I0316 00:20:15.354732  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.354743  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:15.354751  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:15.354818  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:15.393343  124077 cri.go:89] found id: ""
	I0316 00:20:15.393377  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.393387  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:15.393395  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:15.393464  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:15.432831  124077 cri.go:89] found id: ""
	I0316 00:20:15.432864  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.432875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:15.432884  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:15.432948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:15.468176  124077 cri.go:89] found id: ""
	I0316 00:20:15.468204  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.468215  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:15.468223  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:15.468290  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:15.502661  124077 cri.go:89] found id: ""
	I0316 00:20:15.502689  124077 logs.go:276] 0 containers: []
	W0316 00:20:15.502697  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:15.502705  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:15.502719  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:15.559357  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:15.559404  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:15.574936  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:15.574978  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:15.655720  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:15.655748  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:15.655765  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:15.738127  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:15.738163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:18.278617  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:18.293247  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:18.293322  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:18.332553  124077 cri.go:89] found id: ""
	I0316 00:20:18.332581  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.332589  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:18.332594  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:18.332659  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:18.370294  124077 cri.go:89] found id: ""
	I0316 00:20:18.370328  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.370336  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:18.370342  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:18.370397  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:18.406741  124077 cri.go:89] found id: ""
	I0316 00:20:18.406766  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.406774  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:18.406786  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:18.406842  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:18.441713  124077 cri.go:89] found id: ""
	I0316 00:20:18.441743  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.441754  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:18.441761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:18.441838  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:18.477817  124077 cri.go:89] found id: ""
	I0316 00:20:18.477847  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.477857  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:18.477865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:18.477929  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:18.514538  124077 cri.go:89] found id: ""
	I0316 00:20:18.514564  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.514575  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:18.514585  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:18.514652  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:18.553394  124077 cri.go:89] found id: ""
	I0316 00:20:18.553421  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.553430  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:18.553437  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:18.553512  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:18.590061  124077 cri.go:89] found id: ""
	I0316 00:20:18.590091  124077 logs.go:276] 0 containers: []
	W0316 00:20:18.590101  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:18.590111  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:18.590125  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:18.644491  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:18.644528  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:18.659744  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:18.659772  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:18.733671  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:18.733699  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:18.733714  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:18.821851  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:18.821912  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:15.781621  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.277078  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.277734  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:15.620224  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:18.118862  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.118920  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:20.117088  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.614917  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:21.362012  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:21.375963  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:21.376042  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:21.417997  124077 cri.go:89] found id: ""
	I0316 00:20:21.418025  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.418033  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:21.418039  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:21.418108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:21.457491  124077 cri.go:89] found id: ""
	I0316 00:20:21.457518  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.457526  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:21.457532  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:21.457595  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:21.495918  124077 cri.go:89] found id: ""
	I0316 00:20:21.496045  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.496071  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:21.496080  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:21.496149  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:21.533456  124077 cri.go:89] found id: ""
	I0316 00:20:21.533487  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.533499  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:21.533507  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:21.533647  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:21.570947  124077 cri.go:89] found id: ""
	I0316 00:20:21.570978  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.570988  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:21.570993  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:21.571070  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:21.608086  124077 cri.go:89] found id: ""
	I0316 00:20:21.608112  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.608156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:21.608167  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:21.608223  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:21.649545  124077 cri.go:89] found id: ""
	I0316 00:20:21.649577  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.649587  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:21.649593  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:21.649648  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:21.687487  124077 cri.go:89] found id: ""
	I0316 00:20:21.687519  124077 logs.go:276] 0 containers: []
	W0316 00:20:21.687530  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:21.687548  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:21.687572  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:21.742575  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:21.742615  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:21.757996  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:21.758033  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:21.829438  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:21.829469  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:21.829488  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:21.914984  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:21.915036  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:24.464154  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:24.478229  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:24.478310  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:24.513006  124077 cri.go:89] found id: ""
	I0316 00:20:24.513039  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.513050  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:24.513059  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:24.513121  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:24.552176  124077 cri.go:89] found id: ""
	I0316 00:20:24.552200  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.552210  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:24.552218  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:24.552283  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:24.584893  124077 cri.go:89] found id: ""
	I0316 00:20:24.584918  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.584926  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:24.584933  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:24.584983  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:24.620251  124077 cri.go:89] found id: ""
	I0316 00:20:24.620280  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.620288  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:24.620294  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:24.620341  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:24.655242  124077 cri.go:89] found id: ""
	I0316 00:20:24.655270  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.655282  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:24.655289  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:24.655376  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:24.691123  124077 cri.go:89] found id: ""
	I0316 00:20:24.691151  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.691159  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:24.691166  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:24.691227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:24.726574  124077 cri.go:89] found id: ""
	I0316 00:20:24.726606  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.726615  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:24.726621  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:24.726681  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:24.762695  124077 cri.go:89] found id: ""
	I0316 00:20:24.762729  124077 logs.go:276] 0 containers: []
	W0316 00:20:24.762739  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:24.762750  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:24.762767  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:24.818781  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:24.818816  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:24.834227  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:24.834260  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:24.902620  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:24.902653  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:24.902670  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:24.984221  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:24.984267  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:22.779251  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.276842  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:22.118990  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:24.119699  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:25.114563  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.614869  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:27.525241  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:27.540098  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:27.540171  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:27.579798  124077 cri.go:89] found id: ""
	I0316 00:20:27.579828  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.579837  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:27.579843  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:27.579896  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:27.623920  124077 cri.go:89] found id: ""
	I0316 00:20:27.623948  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.623958  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:27.623966  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:27.624029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:27.661148  124077 cri.go:89] found id: ""
	I0316 00:20:27.661180  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.661190  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:27.661197  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:27.661264  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:27.700856  124077 cri.go:89] found id: ""
	I0316 00:20:27.700881  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.700890  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:27.700896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:27.700944  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:27.736958  124077 cri.go:89] found id: ""
	I0316 00:20:27.736983  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.736992  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:27.736997  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:27.737047  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:27.794295  124077 cri.go:89] found id: ""
	I0316 00:20:27.794340  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.794351  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:27.794358  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:27.794424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:27.831329  124077 cri.go:89] found id: ""
	I0316 00:20:27.831368  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.831380  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:27.831389  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:27.831456  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:27.865762  124077 cri.go:89] found id: ""
	I0316 00:20:27.865787  124077 logs.go:276] 0 containers: []
	W0316 00:20:27.865798  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:27.865810  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:27.865828  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:27.917559  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:27.917598  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:27.932090  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:27.932130  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:28.009630  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:28.009751  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:28.009824  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:28.093417  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:28.093466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:27.277136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.777082  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:26.619354  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:28.619489  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:29.619807  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:32.117311  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.640765  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:30.654286  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:30.654372  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:30.690324  124077 cri.go:89] found id: ""
	I0316 00:20:30.690362  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.690374  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:30.690381  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:30.690457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:30.728051  124077 cri.go:89] found id: ""
	I0316 00:20:30.728086  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.728098  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:30.728106  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:30.728172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:30.764488  124077 cri.go:89] found id: ""
	I0316 00:20:30.764516  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.764528  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:30.764543  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:30.764608  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:30.807496  124077 cri.go:89] found id: ""
	I0316 00:20:30.807532  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.807546  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:30.807553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:30.807613  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:30.855653  124077 cri.go:89] found id: ""
	I0316 00:20:30.855689  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.855700  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:30.855708  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:30.855772  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:30.892270  124077 cri.go:89] found id: ""
	I0316 00:20:30.892301  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.892315  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:30.892322  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:30.892388  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:30.931422  124077 cri.go:89] found id: ""
	I0316 00:20:30.931453  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.931461  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:30.931467  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:30.931517  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:30.974563  124077 cri.go:89] found id: ""
	I0316 00:20:30.974592  124077 logs.go:276] 0 containers: []
	W0316 00:20:30.974601  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:30.974613  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:30.974630  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:31.027388  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:31.027423  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:31.041192  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:31.041225  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:31.106457  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:31.106479  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:31.106502  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:31.187288  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:31.187340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:33.732552  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:33.748045  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:33.748108  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:33.785037  124077 cri.go:89] found id: ""
	I0316 00:20:33.785067  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.785075  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:33.785082  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:33.785145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:33.822261  124077 cri.go:89] found id: ""
	I0316 00:20:33.822287  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.822294  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:33.822299  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:33.822360  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:33.858677  124077 cri.go:89] found id: ""
	I0316 00:20:33.858716  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.858727  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:33.858735  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:33.858799  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:33.895003  124077 cri.go:89] found id: ""
	I0316 00:20:33.895034  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.895046  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:33.895053  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:33.895122  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:33.931794  124077 cri.go:89] found id: ""
	I0316 00:20:33.931826  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.931837  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:33.931845  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:33.931909  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:33.968720  124077 cri.go:89] found id: ""
	I0316 00:20:33.968747  124077 logs.go:276] 0 containers: []
	W0316 00:20:33.968755  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:33.968761  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:33.968810  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:34.005631  124077 cri.go:89] found id: ""
	I0316 00:20:34.005656  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.005663  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:34.005668  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:34.005725  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:34.044383  124077 cri.go:89] found id: ""
	I0316 00:20:34.044412  124077 logs.go:276] 0 containers: []
	W0316 00:20:34.044423  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:34.044436  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:34.044453  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:34.101315  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:34.101355  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:34.116335  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:34.116362  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:34.216365  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:34.216399  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:34.216416  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:34.312368  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:34.312415  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:32.277582  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.778394  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:30.622010  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:33.119518  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:35.119736  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:34.613788  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.613878  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:36.851480  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:36.866891  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:36.866969  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:36.905951  124077 cri.go:89] found id: ""
	I0316 00:20:36.905991  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.906001  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:36.906010  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:36.906088  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:36.951245  124077 cri.go:89] found id: ""
	I0316 00:20:36.951275  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.951284  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:36.951290  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:36.951446  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:36.994002  124077 cri.go:89] found id: ""
	I0316 00:20:36.994036  124077 logs.go:276] 0 containers: []
	W0316 00:20:36.994048  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:36.994057  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:36.994124  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.034979  124077 cri.go:89] found id: ""
	I0316 00:20:37.035009  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.035020  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:37.035028  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:37.035099  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:37.078841  124077 cri.go:89] found id: ""
	I0316 00:20:37.078875  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.078888  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:37.078895  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:37.079068  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:37.126838  124077 cri.go:89] found id: ""
	I0316 00:20:37.126864  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.126874  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:37.126882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:37.126945  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:37.167933  124077 cri.go:89] found id: ""
	I0316 00:20:37.167961  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.167973  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:37.167980  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:37.168048  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:37.207709  124077 cri.go:89] found id: ""
	I0316 00:20:37.207746  124077 logs.go:276] 0 containers: []
	W0316 00:20:37.207758  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:37.207770  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:37.207783  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:37.263184  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:37.263220  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:37.278500  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:37.278531  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:37.359337  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:37.359361  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:37.359379  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:37.448692  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:37.448737  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:39.990370  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:40.006676  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:40.006780  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:40.056711  124077 cri.go:89] found id: ""
	I0316 00:20:40.056751  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.056762  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:40.056771  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:40.056837  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:40.102439  124077 cri.go:89] found id: ""
	I0316 00:20:40.102478  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.102491  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:40.102500  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:40.102578  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:40.143289  124077 cri.go:89] found id: ""
	I0316 00:20:40.143341  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.143353  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:40.143362  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:40.143437  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:37.277007  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.776793  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:37.121196  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:39.619239  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:38.616664  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:41.112900  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:40.190311  124077 cri.go:89] found id: ""
	I0316 00:20:40.190339  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.190353  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:40.190361  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:40.190426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:40.227313  124077 cri.go:89] found id: ""
	I0316 00:20:40.227381  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.227392  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:40.227398  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:40.227451  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:40.270552  124077 cri.go:89] found id: ""
	I0316 00:20:40.270584  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.270595  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:40.270603  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:40.270668  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:40.309786  124077 cri.go:89] found id: ""
	I0316 00:20:40.309814  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.309825  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:40.309836  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:40.309895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:40.344643  124077 cri.go:89] found id: ""
	I0316 00:20:40.344690  124077 logs.go:276] 0 containers: []
	W0316 00:20:40.344702  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:40.344714  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:40.344732  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:40.358016  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:40.358049  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:40.441350  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:40.441377  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:40.441394  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:40.516651  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:40.516690  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:40.558855  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:40.558887  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.111064  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:43.127599  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:43.127675  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:43.169159  124077 cri.go:89] found id: ""
	I0316 00:20:43.169189  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.169200  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:43.169207  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:43.169265  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:43.206353  124077 cri.go:89] found id: ""
	I0316 00:20:43.206385  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.206393  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:43.206399  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:43.206457  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:43.243152  124077 cri.go:89] found id: ""
	I0316 00:20:43.243184  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.243193  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:43.243199  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:43.243263  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:43.282871  124077 cri.go:89] found id: ""
	I0316 00:20:43.282903  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.282913  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:43.282920  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:43.282989  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:43.318561  124077 cri.go:89] found id: ""
	I0316 00:20:43.318591  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.318601  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:43.318611  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:43.318676  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:43.355762  124077 cri.go:89] found id: ""
	I0316 00:20:43.355797  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.355808  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:43.355816  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:43.355884  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:43.399425  124077 cri.go:89] found id: ""
	I0316 00:20:43.399460  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.399473  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:43.399481  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:43.399553  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:43.443103  124077 cri.go:89] found id: ""
	I0316 00:20:43.443142  124077 logs.go:276] 0 containers: []
	W0316 00:20:43.443166  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:43.443179  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:43.443196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:43.499111  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:43.499160  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:43.514299  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:43.514336  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:43.597592  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:43.597620  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:43.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:43.686243  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:43.686287  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:41.777952  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.276802  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:42.119128  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:44.119255  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:43.114941  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:45.614095  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:47.616615  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.232128  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:46.246233  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:46.246315  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:46.285818  124077 cri.go:89] found id: ""
	I0316 00:20:46.285848  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.285856  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:46.285864  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:46.285935  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:46.325256  124077 cri.go:89] found id: ""
	I0316 00:20:46.325285  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.325296  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:46.325302  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:46.325355  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:46.363235  124077 cri.go:89] found id: ""
	I0316 00:20:46.363277  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.363290  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:46.363298  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:46.363381  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:46.402482  124077 cri.go:89] found id: ""
	I0316 00:20:46.402523  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.402537  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:46.402546  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:46.402619  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:46.439464  124077 cri.go:89] found id: ""
	I0316 00:20:46.439498  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.439509  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:46.439517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:46.439581  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:46.476838  124077 cri.go:89] found id: ""
	I0316 00:20:46.476867  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.476875  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:46.476882  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:46.476930  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:46.513210  124077 cri.go:89] found id: ""
	I0316 00:20:46.513244  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.513256  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:46.513263  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:46.513337  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:46.550728  124077 cri.go:89] found id: ""
	I0316 00:20:46.550757  124077 logs.go:276] 0 containers: []
	W0316 00:20:46.550765  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:46.550780  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:46.550796  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:46.564258  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:46.564294  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:46.640955  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:46.640979  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:46.640997  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:46.720167  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:46.720207  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.765907  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:46.765952  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.321181  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:49.335347  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:49.335412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:49.376619  124077 cri.go:89] found id: ""
	I0316 00:20:49.376656  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.376667  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:49.376675  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:49.376738  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:49.418294  124077 cri.go:89] found id: ""
	I0316 00:20:49.418325  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.418337  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:49.418345  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:49.418412  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:49.456129  124077 cri.go:89] found id: ""
	I0316 00:20:49.456163  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.456174  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:49.456182  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:49.456250  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:49.496510  124077 cri.go:89] found id: ""
	I0316 00:20:49.496547  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.496559  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:49.496568  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:49.496637  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:49.537824  124077 cri.go:89] found id: ""
	I0316 00:20:49.537856  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.537866  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:49.537874  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:49.537948  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:49.581030  124077 cri.go:89] found id: ""
	I0316 00:20:49.581064  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.581076  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:49.581088  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:49.581173  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:49.619975  124077 cri.go:89] found id: ""
	I0316 00:20:49.620002  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.620011  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:49.620019  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:49.620078  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:49.659661  124077 cri.go:89] found id: ""
	I0316 00:20:49.659692  124077 logs.go:276] 0 containers: []
	W0316 00:20:49.659703  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:49.659714  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:49.659731  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:49.721760  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:49.721798  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:49.736556  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:49.736586  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:49.810529  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:49.810565  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:49.810580  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:49.891223  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:49.891272  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:46.277300  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.777275  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:46.119389  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:48.618309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.116327  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.614990  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.432023  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:52.446725  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:52.446801  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:52.483838  124077 cri.go:89] found id: ""
	I0316 00:20:52.483865  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.483874  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:52.483880  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:52.483965  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:52.520027  124077 cri.go:89] found id: ""
	I0316 00:20:52.520067  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.520080  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:52.520100  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:52.520174  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:52.557123  124077 cri.go:89] found id: ""
	I0316 00:20:52.557151  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.557162  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:52.557171  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:52.557238  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:52.592670  124077 cri.go:89] found id: ""
	I0316 00:20:52.592698  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.592706  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:52.592712  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:52.592762  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:52.631127  124077 cri.go:89] found id: ""
	I0316 00:20:52.631159  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.631170  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:52.631178  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:52.631240  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:52.669675  124077 cri.go:89] found id: ""
	I0316 00:20:52.669714  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.669724  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:52.669732  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:52.669796  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:52.706717  124077 cri.go:89] found id: ""
	I0316 00:20:52.706745  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.706755  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:52.706763  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:52.706827  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:52.749475  124077 cri.go:89] found id: ""
	I0316 00:20:52.749510  124077 logs.go:276] 0 containers: []
	W0316 00:20:52.749521  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:52.749533  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:52.749550  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:52.825420  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:52.825449  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:52.825466  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:52.906977  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:52.907019  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:52.954769  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:52.954806  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:53.009144  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:53.009196  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:50.777563  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:52.778761  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.276863  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:50.619469  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:53.119593  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.116184  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:57.613355  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.524893  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:55.538512  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:55.538596  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:55.577822  124077 cri.go:89] found id: ""
	I0316 00:20:55.577852  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.577863  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:55.577869  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:55.577938  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:55.619367  124077 cri.go:89] found id: ""
	I0316 00:20:55.619403  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.619416  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:55.619425  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:55.619498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:55.663045  124077 cri.go:89] found id: ""
	I0316 00:20:55.663086  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.663100  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:55.663110  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:55.663181  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:55.701965  124077 cri.go:89] found id: ""
	I0316 00:20:55.701995  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.702006  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:55.702012  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:55.702062  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:55.738558  124077 cri.go:89] found id: ""
	I0316 00:20:55.738588  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.738599  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:55.738606  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:55.738670  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:55.777116  124077 cri.go:89] found id: ""
	I0316 00:20:55.777145  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.777156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:55.777164  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:55.777227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:55.818329  124077 cri.go:89] found id: ""
	I0316 00:20:55.818359  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.818370  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:55.818386  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:55.818458  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:55.856043  124077 cri.go:89] found id: ""
	I0316 00:20:55.856080  124077 logs.go:276] 0 containers: []
	W0316 00:20:55.856091  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:55.856104  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:55.856121  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:55.911104  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:55.911147  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:55.926133  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:55.926163  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:56.008849  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:56.008872  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:56.008886  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:56.092695  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:56.092736  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:58.638164  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:20:58.652839  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:20:58.652901  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:20:58.688998  124077 cri.go:89] found id: ""
	I0316 00:20:58.689034  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.689045  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:20:58.689052  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:20:58.689117  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:20:58.725483  124077 cri.go:89] found id: ""
	I0316 00:20:58.725523  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.725543  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:20:58.725551  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:20:58.725629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:20:58.761082  124077 cri.go:89] found id: ""
	I0316 00:20:58.761117  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.761130  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:20:58.761139  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:20:58.761221  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:20:58.800217  124077 cri.go:89] found id: ""
	I0316 00:20:58.800253  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.800264  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:20:58.800271  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:20:58.800331  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:20:58.835843  124077 cri.go:89] found id: ""
	I0316 00:20:58.835878  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.835889  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:20:58.835896  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:20:58.835968  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:20:58.872238  124077 cri.go:89] found id: ""
	I0316 00:20:58.872269  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.872277  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:20:58.872284  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:20:58.872334  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:20:58.911668  124077 cri.go:89] found id: ""
	I0316 00:20:58.911703  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.911714  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:20:58.911723  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:20:58.911786  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:20:58.949350  124077 cri.go:89] found id: ""
	I0316 00:20:58.949383  124077 logs.go:276] 0 containers: []
	W0316 00:20:58.949393  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:20:58.949405  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:20:58.949429  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:20:59.008224  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:20:59.008262  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:20:59.023379  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:20:59.023420  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:20:59.102744  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:20:59.102779  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:20:59.102799  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:20:59.185635  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:20:59.185673  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:20:57.776955  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.276381  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:55.619683  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:58.122772  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:20:59.616518  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.115379  123537 pod_ready.go:102] pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.613248  123537 pod_ready.go:81] duration metric: took 4m0.006848891s for pod "metrics-server-57f55c9bc5-bfnwf" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:02.613273  123537 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:02.613280  123537 pod_ready.go:38] duration metric: took 4m5.267062496s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:21:02.613297  123537 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:02.613347  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:02.613393  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:02.670107  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:02.670139  123537 cri.go:89] found id: ""
	I0316 00:21:02.670149  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:02.670210  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.675144  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:02.675212  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:02.720695  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:02.720720  123537 cri.go:89] found id: ""
	I0316 00:21:02.720729  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:02.720790  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.725490  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:02.725570  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.728770  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:01.742641  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:01.742712  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:01.779389  124077 cri.go:89] found id: ""
	I0316 00:21:01.779419  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.779428  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:01.779436  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:01.779498  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:01.818403  124077 cri.go:89] found id: ""
	I0316 00:21:01.818439  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.818451  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:01.818459  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:01.818514  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:01.854879  124077 cri.go:89] found id: ""
	I0316 00:21:01.854911  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.854923  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:01.854931  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:01.855000  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:01.889627  124077 cri.go:89] found id: ""
	I0316 00:21:01.889661  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.889673  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:01.889681  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:01.889751  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:01.928372  124077 cri.go:89] found id: ""
	I0316 00:21:01.928408  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.928419  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:01.928427  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:01.928494  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:01.967615  124077 cri.go:89] found id: ""
	I0316 00:21:01.967645  124077 logs.go:276] 0 containers: []
	W0316 00:21:01.967655  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:01.967669  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:01.967726  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.006156  124077 cri.go:89] found id: ""
	I0316 00:21:02.006198  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.006212  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.006222  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:02.006291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:02.048403  124077 cri.go:89] found id: ""
	I0316 00:21:02.048435  124077 logs.go:276] 0 containers: []
	W0316 00:21:02.048447  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:02.048460  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:02.048536  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.100693  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:02.100733  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:02.117036  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:02.117073  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:02.198675  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:02.198702  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:02.198720  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:02.275769  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:02.275815  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:04.819150  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:04.835106  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:04.835172  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:04.878522  124077 cri.go:89] found id: ""
	I0316 00:21:04.878557  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.878568  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:04.878576  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:04.878629  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:04.914715  124077 cri.go:89] found id: ""
	I0316 00:21:04.914751  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.914762  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:04.914778  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:04.914843  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:04.953600  124077 cri.go:89] found id: ""
	I0316 00:21:04.953646  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.953657  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:04.953666  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:04.953737  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:04.990051  124077 cri.go:89] found id: ""
	I0316 00:21:04.990081  124077 logs.go:276] 0 containers: []
	W0316 00:21:04.990092  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:04.990099  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:04.990162  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:05.031604  124077 cri.go:89] found id: ""
	I0316 00:21:05.031631  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.031639  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:05.031645  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:05.031711  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:05.072114  124077 cri.go:89] found id: ""
	I0316 00:21:05.072145  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.072156  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:05.072162  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:05.072227  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:05.111559  124077 cri.go:89] found id: ""
	I0316 00:21:05.111589  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.111600  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:05.111608  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:05.111673  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:05.150787  124077 cri.go:89] found id: ""
	I0316 00:21:05.150823  124077 logs.go:276] 0 containers: []
	W0316 00:21:05.150833  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:05.150845  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:05.150871  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:02.276825  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.779811  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:00.617765  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.619210  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:04.619603  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:02.778908  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:02.778959  123537 cri.go:89] found id: ""
	I0316 00:21:02.778971  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:02.779028  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.784772  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:02.784864  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:02.830682  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:02.830709  123537 cri.go:89] found id: ""
	I0316 00:21:02.830719  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:02.830784  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.835733  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:02.835813  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:02.875862  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:02.875890  123537 cri.go:89] found id: ""
	I0316 00:21:02.875902  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:02.875967  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.880801  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:02.880857  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:02.921585  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:02.921611  123537 cri.go:89] found id: ""
	I0316 00:21:02.921622  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:02.921689  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:02.929521  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:02.929593  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:02.977621  123537 cri.go:89] found id: ""
	I0316 00:21:02.977646  123537 logs.go:276] 0 containers: []
	W0316 00:21:02.977657  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:02.977668  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:02.977723  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:03.020159  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.020186  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.020193  123537 cri.go:89] found id: ""
	I0316 00:21:03.020204  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:03.020274  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.025593  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:03.030718  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:03.030744  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:03.090141  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:03.090182  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:03.147416  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:03.147466  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:03.189686  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:03.189733  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:03.245980  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:03.246020  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:03.296494  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:03.296534  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:03.349602  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:03.349635  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:03.364783  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:03.364819  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:03.513917  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:03.513955  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:03.567916  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:03.567952  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:03.607620  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:03.607658  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:03.658683  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:03.658717  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:03.699797  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:03.699827  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:06.715440  123537 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:06.733725  123537 api_server.go:72] duration metric: took 4m16.598062692s to wait for apiserver process to appear ...
	I0316 00:21:06.733759  123537 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:06.733810  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:06.733868  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:06.775396  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:06.775431  123537 cri.go:89] found id: ""
	I0316 00:21:06.775442  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:06.775506  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.780448  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:06.780503  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:06.836927  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:06.836962  123537 cri.go:89] found id: ""
	I0316 00:21:06.836972  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:06.837025  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.841803  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:06.841869  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:06.887445  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:06.887470  123537 cri.go:89] found id: ""
	I0316 00:21:06.887479  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:06.887534  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.892112  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:06.892192  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:06.936614  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:06.936642  123537 cri.go:89] found id: ""
	I0316 00:21:06.936653  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:06.936717  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.943731  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:06.943799  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:06.986738  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:06.986764  123537 cri.go:89] found id: ""
	I0316 00:21:06.986774  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:06.986843  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:06.991555  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:06.991621  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:07.052047  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:07.052074  123537 cri.go:89] found id: ""
	I0316 00:21:07.052082  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:07.052133  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.057297  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:07.057358  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:07.104002  123537 cri.go:89] found id: ""
	I0316 00:21:07.104034  123537 logs.go:276] 0 containers: []
	W0316 00:21:07.104042  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:07.104049  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:07.104113  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:07.148540  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:07.148562  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:07.148566  123537 cri.go:89] found id: ""
	I0316 00:21:07.148572  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:07.148620  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.153502  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:07.157741  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:07.157770  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:07.197856  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:07.197889  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:07.654282  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:07.654324  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:07.708539  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:07.708579  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:07.725072  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:07.725104  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:05.203985  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:05.204025  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:05.218688  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:05.218724  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:05.300307  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:05.300331  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:05.300347  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:05.384017  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:05.384058  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.928300  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:07.943214  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:07.943299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:07.985924  124077 cri.go:89] found id: ""
	I0316 00:21:07.985959  124077 logs.go:276] 0 containers: []
	W0316 00:21:07.985970  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:07.985977  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:07.986037  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:08.024385  124077 cri.go:89] found id: ""
	I0316 00:21:08.024414  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.024423  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:08.024428  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:08.024504  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:08.064355  124077 cri.go:89] found id: ""
	I0316 00:21:08.064390  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.064402  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:08.064410  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:08.064482  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:08.104194  124077 cri.go:89] found id: ""
	I0316 00:21:08.104223  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.104232  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:08.104239  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:08.104302  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:08.144711  124077 cri.go:89] found id: ""
	I0316 00:21:08.144748  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.144761  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:08.144771  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:08.144840  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:08.183593  124077 cri.go:89] found id: ""
	I0316 00:21:08.183624  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.183633  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:08.183639  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:08.183688  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:08.226336  124077 cri.go:89] found id: ""
	I0316 00:21:08.226370  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.226383  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:08.226391  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:08.226481  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:08.267431  124077 cri.go:89] found id: ""
	I0316 00:21:08.267464  124077 logs.go:276] 0 containers: []
	W0316 00:21:08.267472  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:08.267482  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:08.267498  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:08.333035  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:08.333070  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:08.347313  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:08.347368  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:08.425510  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:08.425537  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:08.425558  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:08.514573  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:08.514626  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:07.277657  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.780721  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.121773  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:09.619756  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:07.862465  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:07.862498  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:07.925812  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:07.925846  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:07.986121  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:07.986152  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:08.036774  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:08.036817  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:08.091902  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:08.091933  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:08.142096  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:08.142128  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:08.210747  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:08.210789  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:08.270225  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:08.270259  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:10.817112  123537 api_server.go:253] Checking apiserver healthz at https://192.168.61.91:8443/healthz ...
	I0316 00:21:10.822359  123537 api_server.go:279] https://192.168.61.91:8443/healthz returned 200:
	ok
	I0316 00:21:10.823955  123537 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:10.823978  123537 api_server.go:131] duration metric: took 4.090210216s to wait for apiserver health ...
	I0316 00:21:10.823988  123537 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:10.824019  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:10.824076  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:10.872487  123537 cri.go:89] found id: "81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:10.872514  123537 cri.go:89] found id: ""
	I0316 00:21:10.872524  123537 logs.go:276] 1 containers: [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2]
	I0316 00:21:10.872590  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.877131  123537 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:10.877197  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:10.916699  123537 cri.go:89] found id: "229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:10.916728  123537 cri.go:89] found id: ""
	I0316 00:21:10.916737  123537 logs.go:276] 1 containers: [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613]
	I0316 00:21:10.916797  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.921114  123537 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:10.921182  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:10.964099  123537 cri.go:89] found id: "4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:10.964123  123537 cri.go:89] found id: ""
	I0316 00:21:10.964132  123537 logs.go:276] 1 containers: [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b]
	I0316 00:21:10.964191  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:10.968716  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:10.968788  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.008883  123537 cri.go:89] found id: "4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.008909  123537 cri.go:89] found id: ""
	I0316 00:21:11.008919  123537 logs.go:276] 1 containers: [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977]
	I0316 00:21:11.008974  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.014068  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.014138  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.067209  123537 cri.go:89] found id: "0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.067239  123537 cri.go:89] found id: ""
	I0316 00:21:11.067251  123537 logs.go:276] 1 containers: [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c]
	I0316 00:21:11.067315  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.072536  123537 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.072663  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.119366  123537 cri.go:89] found id: "9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.119399  123537 cri.go:89] found id: ""
	I0316 00:21:11.119411  123537 logs.go:276] 1 containers: [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c]
	I0316 00:21:11.119462  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.124502  123537 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.124590  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.169458  123537 cri.go:89] found id: ""
	I0316 00:21:11.169494  123537 logs.go:276] 0 containers: []
	W0316 00:21:11.169505  123537 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.169513  123537 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:11.169576  123537 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:11.218886  123537 cri.go:89] found id: "413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:11.218923  123537 cri.go:89] found id: "ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:11.218928  123537 cri.go:89] found id: ""
	I0316 00:21:11.218938  123537 logs.go:276] 2 containers: [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65]
	I0316 00:21:11.219002  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.223583  123537 ssh_runner.go:195] Run: which crictl
	I0316 00:21:11.228729  123537 logs.go:123] Gathering logs for kube-apiserver [81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2] ...
	I0316 00:21:11.228753  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81025ff5aef08a9d0806b4259b212b80d3ed7a2f696ceb5d369915590b2e18d2"
	I0316 00:21:11.282781  123537 logs.go:123] Gathering logs for etcd [229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613] ...
	I0316 00:21:11.282818  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 229fef1811744758a4cda649dfc6bb89a72e72fb729d97e3ea56f46ee88e6613"
	I0316 00:21:11.347330  123537 logs.go:123] Gathering logs for kube-scheduler [4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977] ...
	I0316 00:21:11.347379  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4909a6f121b0cc33ad87944b1df228c47a5e3f3e5f09e112728e58243cef8977"
	I0316 00:21:11.401191  123537 logs.go:123] Gathering logs for kube-proxy [0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c] ...
	I0316 00:21:11.401225  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 0947f6f374016eb5012cc58607b003de088f698ed1325fcc8cbbadf265c5999c"
	I0316 00:21:11.453126  123537 logs.go:123] Gathering logs for kube-controller-manager [9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c] ...
	I0316 00:21:11.453158  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9041a3c9211cc9df12679c86d590e470c01cbfad8c72ddaf5b0bc5016787a04c"
	I0316 00:21:11.523058  123537 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.523110  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.944108  123537 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.944157  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:12.001558  123537 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:12.001602  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:12.062833  123537 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:12.062885  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:12.078726  123537 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:12.078762  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:12.209248  123537 logs.go:123] Gathering logs for coredns [4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b] ...
	I0316 00:21:12.209284  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4e6f75410b4de92d8cc976cf8f0e335a7042a32fd12651d9f101a4798c94523b"
	I0316 00:21:12.251891  123537 logs.go:123] Gathering logs for storage-provisioner [413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08] ...
	I0316 00:21:12.251930  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 413fba3fe664b4b8d84eeaf89fe1fd6717f80106c1b24d68f3d7fa161bfb9b08"
	I0316 00:21:12.296240  123537 logs.go:123] Gathering logs for storage-provisioner [ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65] ...
	I0316 00:21:12.296271  123537 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ea3eb17a8a72d7607061ab29ae3cc6c56a36c2b4b995faae700ec7c22aafef65"
	I0316 00:21:14.846244  123537 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:14.846274  123537 system_pods.go:61] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.846279  123537 system_pods.go:61] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.846283  123537 system_pods.go:61] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.846287  123537 system_pods.go:61] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.846290  123537 system_pods.go:61] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.846294  123537 system_pods.go:61] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.846299  123537 system_pods.go:61] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.846302  123537 system_pods.go:61] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.846309  123537 system_pods.go:74] duration metric: took 4.022315588s to wait for pod list to return data ...
	I0316 00:21:14.846317  123537 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:14.848830  123537 default_sa.go:45] found service account: "default"
	I0316 00:21:14.848852  123537 default_sa.go:55] duration metric: took 2.529805ms for default service account to be created ...
	I0316 00:21:14.848859  123537 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:14.861369  123537 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:14.861396  123537 system_pods.go:89] "coredns-5dd5756b68-t8xb4" [e9feb9bc-2a4a-402b-9753-f2f84702db9c] Running
	I0316 00:21:14.861401  123537 system_pods.go:89] "etcd-embed-certs-666637" [24700e74-eb75-40aa-bf2d-69ca0eacad92] Running
	I0316 00:21:14.861405  123537 system_pods.go:89] "kube-apiserver-embed-certs-666637" [9440a6c1-9e28-4ddb-8ff3-0d0ec1b50770] Running
	I0316 00:21:14.861409  123537 system_pods.go:89] "kube-controller-manager-embed-certs-666637" [3229280d-3d84-4567-a134-6317d1c7a915] Running
	I0316 00:21:14.861448  123537 system_pods.go:89] "kube-proxy-8fpc5" [a0d4bdc4-4f17-4b6a-8958-cecd1884016e] Running
	I0316 00:21:14.861456  123537 system_pods.go:89] "kube-scheduler-embed-certs-666637" [78081d31-d398-46a4-8912-77f022675d3f] Running
	I0316 00:21:14.861465  123537 system_pods.go:89] "metrics-server-57f55c9bc5-bfnwf" [de35c1e5-3847-4a31-a31a-86aeed12252c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:14.861470  123537 system_pods.go:89] "storage-provisioner" [d503e849-8714-402d-aeef-26cd0f4aff39] Running
	I0316 00:21:14.861478  123537 system_pods.go:126] duration metric: took 12.614437ms to wait for k8s-apps to be running ...
	I0316 00:21:14.861488  123537 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:14.861534  123537 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:14.879439  123537 system_svc.go:56] duration metric: took 17.934537ms WaitForService to wait for kubelet
	I0316 00:21:14.879484  123537 kubeadm.go:576] duration metric: took 4m24.743827748s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:14.879523  123537 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:14.882642  123537 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:14.882673  123537 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:14.882716  123537 node_conditions.go:105] duration metric: took 3.184841ms to run NodePressure ...
	I0316 00:21:14.882733  123537 start.go:240] waiting for startup goroutines ...
	I0316 00:21:14.882749  123537 start.go:245] waiting for cluster config update ...
	I0316 00:21:14.882789  123537 start.go:254] writing updated cluster config ...
	I0316 00:21:14.883119  123537 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:14.937804  123537 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:14.939886  123537 out.go:177] * Done! kubectl is now configured to use "embed-certs-666637" cluster and "default" namespace by default
	I0316 00:21:11.058354  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:11.076319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:11.076421  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:11.116087  124077 cri.go:89] found id: ""
	I0316 00:21:11.116122  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.116133  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:11.116142  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:11.116209  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:11.161424  124077 cri.go:89] found id: ""
	I0316 00:21:11.161467  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.161479  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:11.161487  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:11.161562  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:11.205317  124077 cri.go:89] found id: ""
	I0316 00:21:11.205345  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.205356  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:11.205363  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:11.205424  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:11.247643  124077 cri.go:89] found id: ""
	I0316 00:21:11.247676  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.247689  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:11.247705  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:11.247769  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:11.290355  124077 cri.go:89] found id: ""
	I0316 00:21:11.290376  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.290385  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:11.290394  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:11.290465  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:11.327067  124077 cri.go:89] found id: ""
	I0316 00:21:11.327104  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.327114  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:11.327123  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:11.327187  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:11.366729  124077 cri.go:89] found id: ""
	I0316 00:21:11.366762  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.366773  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:11.366781  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:11.366846  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:11.405344  124077 cri.go:89] found id: ""
	I0316 00:21:11.405367  124077 logs.go:276] 0 containers: []
	W0316 00:21:11.405374  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:11.405384  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:11.405396  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:11.493778  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:11.493823  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:11.540055  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:11.540093  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:11.597597  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:11.597635  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:11.612436  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:11.612478  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:11.690679  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
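The block above is one iteration of a probe that repeats for the rest of this start-up: for every control-plane component, cri.go runs `crictl ps -a --quiet --name=<component>` and treats empty output as "No container was found". A hedged sketch of that probe, assuming crictl on the local PATH rather than minikube's sudo-over-SSH runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listContainerIDs runs the same command seen in the log:
    //   crictl ps -a --quiet --name=<name>
    // --quiet prints one container ID per line; empty output means no match.
    func listContainerIDs(name string) ([]string, error) {
        out, err := exec.Command("crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        var ids []string
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            if line != "" {
                ids = append(ids, line)
            }
        }
        return ids, nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listContainerIDs(name)
            if err != nil {
                fmt.Printf("%s: crictl failed: %v\n", name, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }

With no kube-apiserver container present, the later `kubectl describe nodes` attempts against localhost:8443 can only end in "connection refused", which is what the repeated stderr blocks in this log show.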
	I0316 00:21:14.191119  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:14.207248  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:14.207342  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:14.246503  124077 cri.go:89] found id: ""
	I0316 00:21:14.246544  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.246558  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:14.246568  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:14.246642  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:14.288305  124077 cri.go:89] found id: ""
	I0316 00:21:14.288337  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.288348  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:14.288355  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:14.288423  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:14.325803  124077 cri.go:89] found id: ""
	I0316 00:21:14.325846  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.325857  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:14.325865  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:14.325933  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:14.363494  124077 cri.go:89] found id: ""
	I0316 00:21:14.363531  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.363543  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:14.363551  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:14.363627  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:14.401457  124077 cri.go:89] found id: ""
	I0316 00:21:14.401500  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.401510  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:14.401517  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:14.401588  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:14.440911  124077 cri.go:89] found id: ""
	I0316 00:21:14.440944  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.440956  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:14.440965  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:14.441038  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:14.476691  124077 cri.go:89] found id: ""
	I0316 00:21:14.476733  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.476742  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:14.476747  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:14.476815  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:14.514693  124077 cri.go:89] found id: ""
	I0316 00:21:14.514723  124077 logs.go:276] 0 containers: []
	W0316 00:21:14.514735  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:14.514746  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:14.514763  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:14.594849  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:14.594895  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:14.638166  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:14.638203  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:14.692738  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:14.692779  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:14.715361  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:14.715390  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:14.820557  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:12.278383  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.279769  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:12.124356  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:14.619164  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.321422  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:17.336303  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:17.336386  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:17.386053  124077 cri.go:89] found id: ""
	I0316 00:21:17.386083  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.386092  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:17.386098  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:17.386161  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:17.425777  124077 cri.go:89] found id: ""
	I0316 00:21:17.425808  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.425820  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:17.425827  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:17.425895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:17.465127  124077 cri.go:89] found id: ""
	I0316 00:21:17.465158  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.465169  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:17.465177  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:17.465235  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:17.503288  124077 cri.go:89] found id: ""
	I0316 00:21:17.503315  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.503336  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:17.503344  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:17.503404  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:17.538761  124077 cri.go:89] found id: ""
	I0316 00:21:17.538789  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.538798  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:17.538806  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:17.538863  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:17.576740  124077 cri.go:89] found id: ""
	I0316 00:21:17.576774  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.576785  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:17.576794  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:17.576866  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:17.615945  124077 cri.go:89] found id: ""
	I0316 00:21:17.615970  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.615977  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:17.615983  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:17.616029  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:17.653815  124077 cri.go:89] found id: ""
	I0316 00:21:17.653851  124077 logs.go:276] 0 containers: []
	W0316 00:21:17.653862  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:17.653874  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:17.653898  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:17.739925  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:17.739975  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:17.786158  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:17.786190  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:17.842313  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:17.842358  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:17.857473  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:17.857500  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:17.930972  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:16.777597  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.277188  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:17.119492  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:19.119935  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:20.431560  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:20.449764  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:20.449849  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:20.511074  124077 cri.go:89] found id: ""
	I0316 00:21:20.511106  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.511117  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:20.511127  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:20.511199  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:20.587497  124077 cri.go:89] found id: ""
	I0316 00:21:20.587525  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.587535  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:20.587542  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:20.587606  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:20.627888  124077 cri.go:89] found id: ""
	I0316 00:21:20.627922  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.627933  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:20.627942  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:20.628005  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:20.664946  124077 cri.go:89] found id: ""
	I0316 00:21:20.664974  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.664985  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:20.664992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:20.665064  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:20.706140  124077 cri.go:89] found id: ""
	I0316 00:21:20.706175  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.706186  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:20.706193  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:20.706256  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:20.749871  124077 cri.go:89] found id: ""
	I0316 00:21:20.749899  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.749911  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:20.749918  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:20.750006  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:20.793976  124077 cri.go:89] found id: ""
	I0316 00:21:20.794011  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.794022  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:20.794029  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:20.794094  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:20.840141  124077 cri.go:89] found id: ""
	I0316 00:21:20.840167  124077 logs.go:276] 0 containers: []
	W0316 00:21:20.840176  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:20.840186  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:20.840199  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:20.918756  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:20.918794  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:20.961396  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:20.961434  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.020371  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:21.020413  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:21.036298  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:21.036340  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:21.118772  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:23.619021  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:23.633815  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:23.633895  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:23.678567  124077 cri.go:89] found id: ""
	I0316 00:21:23.678604  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.678616  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:21:23.678623  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:23.678687  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:23.719209  124077 cri.go:89] found id: ""
	I0316 00:21:23.719240  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.719249  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:21:23.719255  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:23.719308  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:23.757949  124077 cri.go:89] found id: ""
	I0316 00:21:23.757977  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.757985  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:21:23.757992  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:23.758044  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:23.801271  124077 cri.go:89] found id: ""
	I0316 00:21:23.801305  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.801314  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:21:23.801319  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:23.801384  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.844489  124077 cri.go:89] found id: ""
	I0316 00:21:23.844530  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.844543  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:21:23.844553  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.844667  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.889044  124077 cri.go:89] found id: ""
	I0316 00:21:23.889075  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.889084  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:21:23.889091  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.889166  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.930232  124077 cri.go:89] found id: ""
	I0316 00:21:23.930263  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.930276  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.930285  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:21:23.930351  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:21:23.970825  124077 cri.go:89] found id: ""
	I0316 00:21:23.970858  124077 logs.go:276] 0 containers: []
	W0316 00:21:23.970869  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:21:23.970881  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.970899  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.988057  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:23.988101  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:21:24.083264  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:21:24.083297  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:24.083314  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:24.164775  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.164819  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.213268  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:24.213305  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:21.278136  123819 pod_ready.go:102] pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:22.779721  123819 pod_ready.go:81] duration metric: took 4m0.010022344s for pod "metrics-server-57f55c9bc5-cm878" in "kube-system" namespace to be "Ready" ...
	E0316 00:21:22.779752  123819 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0316 00:21:22.779762  123819 pod_ready.go:38] duration metric: took 4m5.913207723s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
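The 4m0.010022344s figure above is a wait deadline expiring, which is why the failure reads "context deadline exceeded" rather than a pod error. A generic sketch of that pattern, polling a condition until a context times out; this is not the pod_ready implementation itself, and the short timeout below is only so the demo finishes quickly (the log used a 4-minute window):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls cond every interval until it returns true or ctx expires,
    // which is how a "Ready" wait can end in context.DeadlineExceeded.
    func waitFor(ctx context.Context, interval time.Duration, cond func() bool) error {
        ticker := time.NewTicker(interval)
        defer ticker.Stop()
        for {
            if cond() {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
        defer cancel()
        err := waitFor(ctx, 500*time.Millisecond, func() bool { return false /* pod never Ready */ })
        if errors.Is(err, context.DeadlineExceeded) {
            fmt.Println("WaitExtra: waitPodCondition:", err)
        }
    }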
	I0316 00:21:22.779779  123819 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:21:22.779814  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:22.779876  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:22.836022  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:22.836058  123819 cri.go:89] found id: ""
	I0316 00:21:22.836069  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:22.836131  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.841289  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:22.841362  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:22.883980  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:22.884007  123819 cri.go:89] found id: ""
	I0316 00:21:22.884018  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:22.884084  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.889352  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:22.889427  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:22.929947  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:22.929977  123819 cri.go:89] found id: ""
	I0316 00:21:22.929987  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:22.930033  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.935400  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:22.935485  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:22.975548  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:22.975580  123819 cri.go:89] found id: ""
	I0316 00:21:22.975598  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:22.975671  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:22.981916  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:22.981998  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:23.019925  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.019965  123819 cri.go:89] found id: ""
	I0316 00:21:23.019977  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:23.020046  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.024870  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:23.024960  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:23.068210  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.068241  123819 cri.go:89] found id: ""
	I0316 00:21:23.068253  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:23.068344  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.073492  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:23.073578  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:23.113267  123819 cri.go:89] found id: ""
	I0316 00:21:23.113301  123819 logs.go:276] 0 containers: []
	W0316 00:21:23.113311  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:23.113319  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:23.113382  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:23.160155  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:23.160175  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.160179  123819 cri.go:89] found id: ""
	I0316 00:21:23.160192  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:23.160241  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.165125  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:23.169508  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:23.169530  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:23.218749  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:23.218786  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:23.274140  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:23.274177  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:23.320515  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:23.320559  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:23.835119  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:23.835173  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:23.907635  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:23.907691  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:23.925071  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:23.925126  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:23.991996  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:23.992028  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:24.032865  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:24.032899  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:24.090947  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:24.090987  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:24.285862  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:24.285896  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:24.337983  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:24.338027  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:24.379626  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:24.379657  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
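For containers that do exist, the gather step above is `crictl logs --tail 400 <id>` for each discovered ID. A minimal sketch of that call, assuming crictl is reachable locally; the ID in main is the kube-apiserver container ID already listed in this log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // tailContainerLogs mirrors the gather step in the log:
    //   crictl logs --tail <n> <container-id>
    // and returns the combined stdout/stderr of that command.
    func tailContainerLogs(id string, lines int) (string, error) {
        out, err := exec.Command("crictl", "logs", "--tail", fmt.Sprint(lines), id).CombinedOutput()
        return string(out), err
    }

    func main() {
        id := "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
        logs, err := tailContainerLogs(id, 400)
        if err != nil {
            fmt.Println("crictl logs failed:", err)
            return
        }
        fmt.Print(logs)
    }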
	I0316 00:21:21.618894  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:24.122648  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:26.781593  124077 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.796483  124077 kubeadm.go:591] duration metric: took 4m4.585906419s to restartPrimaryControlPlane
	W0316 00:21:26.796581  124077 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:21:26.796620  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:21:26.918844  123819 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:21:26.938014  123819 api_server.go:72] duration metric: took 4m17.276244s to wait for apiserver process to appear ...
	I0316 00:21:26.938053  123819 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:21:26.938095  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:26.938157  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:26.983515  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:26.983538  123819 cri.go:89] found id: ""
	I0316 00:21:26.983546  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:26.983595  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:26.989278  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:26.989341  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:27.039968  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.040000  123819 cri.go:89] found id: ""
	I0316 00:21:27.040009  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:27.040078  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.045617  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:27.045687  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:27.085920  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.085948  123819 cri.go:89] found id: ""
	I0316 00:21:27.085960  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:27.086029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.090911  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:27.090989  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:27.137289  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:27.137322  123819 cri.go:89] found id: ""
	I0316 00:21:27.137333  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:27.137393  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.141956  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:27.142031  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:27.180823  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.180845  123819 cri.go:89] found id: ""
	I0316 00:21:27.180854  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:27.180919  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.185439  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:27.185523  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:27.225775  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:27.225797  123819 cri.go:89] found id: ""
	I0316 00:21:27.225805  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:27.225854  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.230648  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:27.230717  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:27.269429  123819 cri.go:89] found id: ""
	I0316 00:21:27.269465  123819 logs.go:276] 0 containers: []
	W0316 00:21:27.269477  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:27.269485  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:27.269550  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:27.308288  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.308316  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.308321  123819 cri.go:89] found id: ""
	I0316 00:21:27.308329  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:27.308378  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.312944  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:27.317794  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:27.317829  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:27.364287  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:27.364323  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:27.419482  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:27.419521  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:27.468553  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:27.468585  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:27.513287  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:27.513320  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:27.561382  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:27.561426  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:27.601292  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:27.601325  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:27.656848  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:27.656902  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0316 00:21:27.796212  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:27.796245  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:28.246569  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:28.246611  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:28.302971  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:28.303015  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:28.359613  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:28.359645  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:28.375844  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:28.375877  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:26.124217  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:28.619599  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:31.581925  124077 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (4.785270941s)
	I0316 00:21:31.582012  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:31.600474  124077 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:21:31.613775  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:21:31.626324  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:21:31.626349  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:21:31.626405  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:21:31.637292  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:21:31.637450  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:21:31.648611  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:21:31.659562  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:21:31.659639  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:21:31.670691  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.680786  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:21:31.680861  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:21:31.692150  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:21:31.703506  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:21:31.703574  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
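The cleanup just logged follows one rule per file: grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove the file when the grep fails, so the following `kubeadm init` can regenerate it; the "Process exited with status 2" lines above only mean the files were absent. A hedged sketch of that check-and-remove loop, using local file access instead of minikube's SSH runner:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    const endpoint = "https://control-plane.minikube.internal:8443"

    // cleanStaleKubeconfigs keeps a config only if it already points at the
    // expected control-plane endpoint, otherwise removes it so kubeadm can
    // write a fresh one.
    func cleanStaleKubeconfigs(files []string) {
        for _, f := range files {
            if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
                // grep exits non-zero when the endpoint is missing or the
                // file does not exist; either way the config is stale.
                fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
                _ = os.Remove(f) // ignore "no such file", as `rm -f` does
            }
        }
    }

    func main() {
        cleanStaleKubeconfigs([]string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        })
    }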
	I0316 00:21:31.714387  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:21:31.790886  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:21:31.790944  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:21:31.978226  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:21:31.978378  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:21:31.978524  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:21:32.184780  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:21:32.186747  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:21:32.186848  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:21:32.186940  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:21:32.187045  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:21:32.187126  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:21:32.187256  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:21:32.187359  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:21:32.187447  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:21:32.187527  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:21:32.187623  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:21:32.187716  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:21:32.187771  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:21:32.187827  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:21:32.389660  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:21:32.542791  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:21:32.725548  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:21:33.182865  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:21:33.197784  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:21:33.198953  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:21:33.199022  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:21:33.342898  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:21:30.921320  123819 api_server.go:253] Checking apiserver healthz at https://192.168.72.198:8444/healthz ...
	I0316 00:21:30.926064  123819 api_server.go:279] https://192.168.72.198:8444/healthz returned 200:
	ok
	I0316 00:21:30.927332  123819 api_server.go:141] control plane version: v1.28.4
	I0316 00:21:30.927353  123819 api_server.go:131] duration metric: took 3.989292523s to wait for apiserver health ...
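The healthz probe logged by api_server.go above is a plain HTTPS GET against /healthz that is considered healthy on a 200 response with body "ok". A minimal sketch of such a probe; skipping TLS verification is an assumption made here for brevity, whereas the real client trusts the cluster CA:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz issues the same kind of request seen in the log:
    // GET https://<apiserver>/healthz, healthy when it returns 200 "ok".
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only assumption: skip certificate verification instead
            // of loading the cluster CA bundle.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
        }
        fmt.Printf("%s returned 200: %s\n", url, body)
        return nil
    }

    func main() {
        if err := checkHealthz("https://192.168.72.198:8444/healthz"); err != nil {
            fmt.Println("apiserver not healthy yet:", err)
        }
    }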
	I0316 00:21:30.927361  123819 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:21:30.927386  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:21:30.927438  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:21:30.975348  123819 cri.go:89] found id: "1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:30.975376  123819 cri.go:89] found id: ""
	I0316 00:21:30.975389  123819 logs.go:276] 1 containers: [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012]
	I0316 00:21:30.975459  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:30.980128  123819 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:21:30.980194  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:21:31.029534  123819 cri.go:89] found id: "472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.029563  123819 cri.go:89] found id: ""
	I0316 00:21:31.029574  123819 logs.go:276] 1 containers: [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb]
	I0316 00:21:31.029627  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.034066  123819 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:21:31.034149  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:21:31.073857  123819 cri.go:89] found id: "9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.073884  123819 cri.go:89] found id: ""
	I0316 00:21:31.073892  123819 logs.go:276] 1 containers: [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26]
	I0316 00:21:31.073961  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.078421  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:21:31.078501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:21:31.117922  123819 cri.go:89] found id: "06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.117951  123819 cri.go:89] found id: ""
	I0316 00:21:31.117964  123819 logs.go:276] 1 containers: [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a]
	I0316 00:21:31.118029  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.122435  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:21:31.122501  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:21:31.161059  123819 cri.go:89] found id: "81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.161089  123819 cri.go:89] found id: ""
	I0316 00:21:31.161101  123819 logs.go:276] 1 containers: [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6]
	I0316 00:21:31.161155  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.165503  123819 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:21:31.165572  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:21:31.207637  123819 cri.go:89] found id: "1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.207669  123819 cri.go:89] found id: ""
	I0316 00:21:31.207679  123819 logs.go:276] 1 containers: [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57]
	I0316 00:21:31.207742  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.212296  123819 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:21:31.212360  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:21:31.251480  123819 cri.go:89] found id: ""
	I0316 00:21:31.251519  123819 logs.go:276] 0 containers: []
	W0316 00:21:31.251530  123819 logs.go:278] No container was found matching "kindnet"
	I0316 00:21:31.251539  123819 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0316 00:21:31.251608  123819 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0316 00:21:31.296321  123819 cri.go:89] found id: "663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.296345  123819 cri.go:89] found id: "4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.296350  123819 cri.go:89] found id: ""
	I0316 00:21:31.296357  123819 logs.go:276] 2 containers: [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335]
	I0316 00:21:31.296414  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.302159  123819 ssh_runner.go:195] Run: which crictl
	I0316 00:21:31.306501  123819 logs.go:123] Gathering logs for kube-proxy [81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6] ...
	I0316 00:21:31.306526  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 81911669b085515a8ee6612ab8179b8a59c2131711115be9c1620e604c89e1d6"
	I0316 00:21:31.348347  123819 logs.go:123] Gathering logs for storage-provisioner [4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335] ...
	I0316 00:21:31.348379  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4ed399796d792298763dc54386d73ce0f88aff474c459cbdd13518196ec19335"
	I0316 00:21:31.388542  123819 logs.go:123] Gathering logs for container status ...
	I0316 00:21:31.388573  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0316 00:21:31.439926  123819 logs.go:123] Gathering logs for kubelet ...
	I0316 00:21:31.439962  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:21:31.499674  123819 logs.go:123] Gathering logs for kube-apiserver [1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012] ...
	I0316 00:21:31.499711  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ea844db702630390f42cfc881939160fdbc778ca75a738627bf9b8e905d6012"
	I0316 00:21:31.552720  123819 logs.go:123] Gathering logs for etcd [472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb] ...
	I0316 00:21:31.552771  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 472e7252cc27d1b8f65b31ddaf7d0da5c94758921c17dfd32ea4ad32e2f82cfb"
	I0316 00:21:31.605281  123819 logs.go:123] Gathering logs for coredns [9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26] ...
	I0316 00:21:31.605331  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9d8b76dc25828df433da283ab204deaff6de37b37268ab68fd5336d17e16ff26"
	I0316 00:21:31.651964  123819 logs.go:123] Gathering logs for kube-scheduler [06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a] ...
	I0316 00:21:31.651997  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 06a79188858d0318fb8fb04fa28d4e9717601965129a98524f0d2bf16127c36a"
	I0316 00:21:31.696113  123819 logs.go:123] Gathering logs for kube-controller-manager [1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57] ...
	I0316 00:21:31.696150  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1d277e87ef30677e1837acb77186fbc437a5d8dd89b6a66ac3aeb7d2f720aa57"
	I0316 00:21:31.749712  123819 logs.go:123] Gathering logs for storage-provisioner [663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c] ...
	I0316 00:21:31.749751  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 663378c6a7e6d23a9471b86b002b693a30a523dbb6f02cf96e84ce958da88f4c"
	I0316 00:21:31.801476  123819 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:21:31.801508  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:21:32.236105  123819 logs.go:123] Gathering logs for dmesg ...
	I0316 00:21:32.236146  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:21:32.253815  123819 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:21:32.253848  123819 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
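Each crictl invocation above first resolves a component's container ID and then tails its last 400 log lines; outside the harness the same bundle can usually be collected in one step with minikube itself (an illustrative command, not part of the captured run; the profile name is the one this process reports as "Done!" further down):

	minikube -p default-k8s-diff-port-313436 logs --file=logs.txt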
	I0316 00:21:34.930730  123819 system_pods.go:59] 8 kube-system pods found
	I0316 00:21:34.930759  123819 system_pods.go:61] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.930763  123819 system_pods.go:61] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.930767  123819 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.930772  123819 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.930775  123819 system_pods.go:61] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.930778  123819 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.930783  123819 system_pods.go:61] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.930788  123819 system_pods.go:61] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.930798  123819 system_pods.go:74] duration metric: took 4.003426137s to wait for pod list to return data ...
	I0316 00:21:34.930807  123819 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:21:34.933462  123819 default_sa.go:45] found service account: "default"
	I0316 00:21:34.933492  123819 default_sa.go:55] duration metric: took 2.674728ms for default service account to be created ...
	I0316 00:21:34.933500  123819 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:21:34.939351  123819 system_pods.go:86] 8 kube-system pods found
	I0316 00:21:34.939382  123819 system_pods.go:89] "coredns-5dd5756b68-w9fx2" [2d2fba6b-c237-4590-b025-bd92eda84778] Running
	I0316 00:21:34.939393  123819 system_pods.go:89] "etcd-default-k8s-diff-port-313436" [841e5810-73db-4105-be7b-bf12d208c0bc] Running
	I0316 00:21:34.939400  123819 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-313436" [8861d2b9-ab99-48f3-8033-a6300f75724d] Running
	I0316 00:21:34.939406  123819 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-313436" [0b4ab37c-a675-469d-a221-c5a1c8b1a6f0] Running
	I0316 00:21:34.939414  123819 system_pods.go:89] "kube-proxy-btmmm" [a7f49417-ca50-4c73-b3e7-378b5efffdfe] Running
	I0316 00:21:34.939420  123819 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-313436" [2d6ce89d-6129-47c0-a513-ecd46e1acac0] Running
	I0316 00:21:34.939442  123819 system_pods.go:89] "metrics-server-57f55c9bc5-cm878" [d239b608-f098-4a69-9863-7f7134523952] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:21:34.939454  123819 system_pods.go:89] "storage-provisioner" [c272b778-0e60-4d40-826c-ddf096529b5b] Running
	I0316 00:21:34.939469  123819 system_pods.go:126] duration metric: took 5.962328ms to wait for k8s-apps to be running ...
	I0316 00:21:34.939482  123819 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:21:34.939539  123819 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:21:34.958068  123819 system_svc.go:56] duration metric: took 18.572929ms WaitForService to wait for kubelet
	I0316 00:21:34.958108  123819 kubeadm.go:576] duration metric: took 4m25.296341727s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:21:34.958130  123819 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:21:34.962603  123819 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:21:34.962629  123819 node_conditions.go:123] node cpu capacity is 2
	I0316 00:21:34.962641  123819 node_conditions.go:105] duration metric: took 4.505615ms to run NodePressure ...
	I0316 00:21:34.962657  123819 start.go:240] waiting for startup goroutines ...
	I0316 00:21:34.962667  123819 start.go:245] waiting for cluster config update ...
	I0316 00:21:34.962690  123819 start.go:254] writing updated cluster config ...
	I0316 00:21:34.963009  123819 ssh_runner.go:195] Run: rm -f paused
	I0316 00:21:35.015774  123819 start.go:600] kubectl: 1.29.3, cluster: 1.28.4 (minor skew: 1)
	I0316 00:21:35.019103  123819 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-313436" cluster and "default" namespace by default
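Once minikube prints "Done!", the user's kubeconfig has been switched to the new cluster and namespace. A minimal way to confirm the active context from the host (not captured in this log; the context name comes from the line above) is:

	kubectl config current-context    # expected: default-k8s-diff-port-313436
	kubectl get nodes                 # the control-plane node should report Ready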
	I0316 00:21:33.345261  124077 out.go:204]   - Booting up control plane ...
	I0316 00:21:33.345449  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:21:33.352543  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:21:33.353956  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:21:33.354926  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:21:33.358038  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:21:31.121456  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:33.122437  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:35.618906  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:37.619223  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:40.120743  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:42.619309  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:44.619544  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:47.120179  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:49.619419  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:52.124510  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:54.125147  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:56.621651  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:21:59.120895  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:01.618287  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:03.620297  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:06.119870  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:08.122618  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.359735  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:22:13.360501  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:13.360794  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:10.619464  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:13.121381  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.361680  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:18.361925  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:15.619590  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:18.122483  123454 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace has status "Ready":"False"
	I0316 00:22:19.112568  123454 pod_ready.go:81] duration metric: took 4m0.000767313s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" ...
	E0316 00:22:19.112600  123454 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hffvp" in "kube-system" namespace to be "Ready" (will not retry!)
	I0316 00:22:19.112621  123454 pod_ready.go:38] duration metric: took 4m15.544198169s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:22:19.112652  123454 kubeadm.go:591] duration metric: took 4m23.072115667s to restartPrimaryControlPlane
	W0316 00:22:19.112713  123454 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0316 00:22:19.112769  123454 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:22:28.362165  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:28.362420  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:22:48.363255  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:22:48.363585  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
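The repeated [kubelet-check] messages come from kubeadm polling the kubelet's health endpoint at http://localhost:10248/healthz. When chasing this by hand on the node, the usual first steps are to inspect the service state and its recent journal (a generic sketch, not commands taken from this run):

	sudo systemctl status kubelet
	sudo journalctl -u kubelet -n 50 --no-pager
	curl -sS http://localhost:10248/healthz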
	I0316 00:22:51.249327  123454 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.136527598s)
	I0316 00:22:51.249406  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:22:51.268404  123454 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0316 00:22:51.280832  123454 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:22:51.292639  123454 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:22:51.292661  123454 kubeadm.go:156] found existing configuration files:
	
	I0316 00:22:51.292712  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:22:51.303272  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:22:51.303347  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:22:51.313854  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:22:51.324290  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:22:51.324361  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:22:51.334879  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.345302  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:22:51.345382  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:22:51.355682  123454 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:22:51.366601  123454 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:22:51.366660  123454 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:22:51.377336  123454 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:22:51.594624  123454 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:00.473055  123454 kubeadm.go:309] [init] Using Kubernetes version: v1.29.0-rc.2
	I0316 00:23:00.473140  123454 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:00.473255  123454 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:00.473415  123454 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:00.473551  123454 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:00.473682  123454 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:00.475591  123454 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:00.475704  123454 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:00.475803  123454 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:00.475905  123454 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:00.476001  123454 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:00.476100  123454 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:00.476190  123454 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:00.476281  123454 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:00.476378  123454 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:00.476516  123454 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:00.476647  123454 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:00.476715  123454 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:00.476801  123454 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:00.476879  123454 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:00.476968  123454 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0316 00:23:00.477042  123454 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:00.477166  123454 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:00.477253  123454 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:00.477378  123454 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:00.477480  123454 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:00.479084  123454 out.go:204]   - Booting up control plane ...
	I0316 00:23:00.479206  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:00.479332  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:00.479440  123454 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:00.479541  123454 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:00.479625  123454 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:00.479697  123454 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:00.479874  123454 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:23:00.479994  123454 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.003092 seconds
	I0316 00:23:00.480139  123454 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0316 00:23:00.480339  123454 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0316 00:23:00.480445  123454 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0316 00:23:00.480687  123454 kubeadm.go:309] [mark-control-plane] Marking the node no-preload-238598 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0316 00:23:00.480789  123454 kubeadm.go:309] [bootstrap-token] Using token: aspuu8.i4yhgkjx7e43mgmn
	I0316 00:23:00.482437  123454 out.go:204]   - Configuring RBAC rules ...
	I0316 00:23:00.482568  123454 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0316 00:23:00.482697  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0316 00:23:00.482917  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0316 00:23:00.483119  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0316 00:23:00.483283  123454 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0316 00:23:00.483406  123454 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0316 00:23:00.483582  123454 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0316 00:23:00.483653  123454 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0316 00:23:00.483714  123454 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0316 00:23:00.483720  123454 kubeadm.go:309] 
	I0316 00:23:00.483815  123454 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0316 00:23:00.483833  123454 kubeadm.go:309] 
	I0316 00:23:00.483973  123454 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0316 00:23:00.483986  123454 kubeadm.go:309] 
	I0316 00:23:00.484014  123454 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0316 00:23:00.484119  123454 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0316 00:23:00.484200  123454 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0316 00:23:00.484211  123454 kubeadm.go:309] 
	I0316 00:23:00.484283  123454 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0316 00:23:00.484288  123454 kubeadm.go:309] 
	I0316 00:23:00.484360  123454 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0316 00:23:00.484366  123454 kubeadm.go:309] 
	I0316 00:23:00.484452  123454 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0316 00:23:00.484560  123454 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0316 00:23:00.484657  123454 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0316 00:23:00.484666  123454 kubeadm.go:309] 
	I0316 00:23:00.484798  123454 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0316 00:23:00.484920  123454 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0316 00:23:00.484932  123454 kubeadm.go:309] 
	I0316 00:23:00.485053  123454 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485196  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b \
	I0316 00:23:00.485227  123454 kubeadm.go:309] 	--control-plane 
	I0316 00:23:00.485241  123454 kubeadm.go:309] 
	I0316 00:23:00.485357  123454 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0316 00:23:00.485367  123454 kubeadm.go:309] 
	I0316 00:23:00.485488  123454 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token aspuu8.i4yhgkjx7e43mgmn \
	I0316 00:23:00.485646  123454 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:11e954114ab4c098a451a040f654de3bc76e1d852b3855abe3c71439ebbb803b 
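The join commands printed above embed the bootstrap token aspuu8.i4yhgkjx7e43mgmn, which expires after 24 hours by default. Joining a node later would require minting a fresh command on the control plane (standard kubeadm usage, not shown in this log):

	sudo kubeadm token create --print-join-command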
	I0316 00:23:00.485661  123454 cni.go:84] Creating CNI manager for ""
	I0316 00:23:00.485671  123454 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0316 00:23:00.487417  123454 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0316 00:23:00.489063  123454 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0316 00:23:00.526147  123454 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
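The 457-byte file scp'd above is minikube's bridge CNI configuration. The exact contents are not reproduced in the log; an illustrative conflist of the same general shape (bridge plugin with host-local IPAM plus portmap, subnet chosen here purely for illustration) looks like:

	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}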
	I0316 00:23:00.571796  123454 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:00.571893  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-238598 minikube.k8s.io/updated_at=2024_03_16T00_23_00_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c798c049fb6528bb550c261d9b232f816eafc04e minikube.k8s.io/name=no-preload-238598 minikube.k8s.io/primary=true
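The two kubectl invocations above run against the freshly started apiserver: one binds cluster-admin to the kube-system default service account (the minikube-rbac clusterrolebinding), the other stamps minikube's metadata labels onto the node. Both can be verified afterwards with ordinary kubectl (illustrative, not captured here):

	kubectl get clusterrolebinding minikube-rbac
	kubectl get node no-preload-238598 --show-labels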
	I0316 00:23:00.892908  123454 ops.go:34] apiserver oom_adj: -16
	I0316 00:23:00.892994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.394077  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:01.893097  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.393114  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:02.893994  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.393930  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:03.893428  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.393822  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:04.893810  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.393999  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:05.893998  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.393104  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:06.893725  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.393873  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:07.893432  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.394054  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:08.893595  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.393109  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:09.893621  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.393322  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:10.894024  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.393711  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:11.893465  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.393059  123454 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0316 00:23:12.497890  123454 kubeadm.go:1107] duration metric: took 11.926069028s to wait for elevateKubeSystemPrivileges
	W0316 00:23:12.497951  123454 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0316 00:23:12.497962  123454 kubeadm.go:393] duration metric: took 5m16.508852945s to StartCluster
	I0316 00:23:12.497988  123454 settings.go:142] acquiring lock: {Name:mk9028b5676e7f2fe61a6f7bdc2d165b459935e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.498139  123454 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:23:12.500632  123454 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/kubeconfig: {Name:mk2c2ccada7d7bb6083d9ed0f82fcffeb66532a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0316 00:23:12.500995  123454 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.137 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0316 00:23:12.502850  123454 out.go:177] * Verifying Kubernetes components...
	I0316 00:23:12.501089  123454 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0316 00:23:12.501233  123454 config.go:182] Loaded profile config "no-preload-238598": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0316 00:23:12.504432  123454 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0316 00:23:12.504443  123454 addons.go:69] Setting storage-provisioner=true in profile "no-preload-238598"
	I0316 00:23:12.504491  123454 addons.go:234] Setting addon storage-provisioner=true in "no-preload-238598"
	I0316 00:23:12.504502  123454 addons.go:69] Setting default-storageclass=true in profile "no-preload-238598"
	I0316 00:23:12.504515  123454 addons.go:69] Setting metrics-server=true in profile "no-preload-238598"
	I0316 00:23:12.504526  123454 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-238598"
	I0316 00:23:12.504541  123454 addons.go:234] Setting addon metrics-server=true in "no-preload-238598"
	W0316 00:23:12.504551  123454 addons.go:243] addon metrics-server should already be in state true
	I0316 00:23:12.504582  123454 host.go:66] Checking if "no-preload-238598" exists ...
	W0316 00:23:12.504505  123454 addons.go:243] addon storage-provisioner should already be in state true
	I0316 00:23:12.504656  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.504996  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505012  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.505013  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.504963  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.505229  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.521634  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34591
	I0316 00:23:12.521698  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38305
	I0316 00:23:12.522283  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522377  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.522836  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.522861  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.522990  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.523032  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.523203  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523375  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.523737  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.523758  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524232  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.524277  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.524695  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43355
	I0316 00:23:12.525112  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.525610  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.525637  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.526025  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.526218  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.530010  123454 addons.go:234] Setting addon default-storageclass=true in "no-preload-238598"
	W0316 00:23:12.530029  123454 addons.go:243] addon default-storageclass should already be in state true
	I0316 00:23:12.530053  123454 host.go:66] Checking if "no-preload-238598" exists ...
	I0316 00:23:12.530277  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.530315  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.540310  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45423
	I0316 00:23:12.545850  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.545966  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38599
	I0316 00:23:12.546335  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.546740  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.546761  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.547035  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.547232  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.548605  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.548626  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.549001  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.549058  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35799
	I0316 00:23:12.549268  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.549323  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.549454  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.551419  123454 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0316 00:23:12.549975  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.551115  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.553027  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0316 00:23:12.553050  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0316 00:23:12.553074  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.553082  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.554948  123454 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0316 00:23:12.553404  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.556096  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556544  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.556568  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.556640  123454 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.556660  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0316 00:23:12.556679  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.556769  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.557150  123454 main.go:141] libmachine: Found binary path at /home/jenkins/minikube-integration/17991-75602/.minikube/bin/docker-machine-driver-kvm2
	I0316 00:23:12.557176  123454 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0316 00:23:12.557398  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.557600  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.557886  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.560220  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560555  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.560582  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.560759  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.560982  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.561157  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.561318  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.574877  123454 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34499
	I0316 00:23:12.575802  123454 main.go:141] libmachine: () Calling .GetVersion
	I0316 00:23:12.576313  123454 main.go:141] libmachine: Using API Version  1
	I0316 00:23:12.576337  123454 main.go:141] libmachine: () Calling .SetConfigRaw
	I0316 00:23:12.576640  123454 main.go:141] libmachine: () Calling .GetMachineName
	I0316 00:23:12.577015  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetState
	I0316 00:23:12.578483  123454 main.go:141] libmachine: (no-preload-238598) Calling .DriverName
	I0316 00:23:12.578814  123454 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.578835  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0316 00:23:12.578856  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHHostname
	I0316 00:23:12.581832  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582439  123454 main.go:141] libmachine: (no-preload-238598) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:67:85:15", ip: ""} in network mk-no-preload-238598: {Iface:virbr2 ExpiryTime:2024-03-16 01:17:25 +0000 UTC Type:0 Mac:52:54:00:67:85:15 Iaid: IPaddr:192.168.50.137 Prefix:24 Hostname:no-preload-238598 Clientid:01:52:54:00:67:85:15}
	I0316 00:23:12.582454  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHPort
	I0316 00:23:12.582465  123454 main.go:141] libmachine: (no-preload-238598) DBG | domain no-preload-238598 has defined IP address 192.168.50.137 and MAC address 52:54:00:67:85:15 in network mk-no-preload-238598
	I0316 00:23:12.582635  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHKeyPath
	I0316 00:23:12.582819  123454 main.go:141] libmachine: (no-preload-238598) Calling .GetSSHUsername
	I0316 00:23:12.582969  123454 sshutil.go:53] new ssh client: &{IP:192.168.50.137 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/no-preload-238598/id_rsa Username:docker}
	I0316 00:23:12.729051  123454 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0316 00:23:12.747162  123454 node_ready.go:35] waiting up to 6m0s for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.759957  123454 node_ready.go:49] node "no-preload-238598" has status "Ready":"True"
	I0316 00:23:12.759992  123454 node_ready.go:38] duration metric: took 12.79378ms for node "no-preload-238598" to be "Ready" ...
	I0316 00:23:12.760006  123454 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
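The readiness gates the harness waits on here can be reproduced manually with kubectl wait (illustrative equivalents, not from the log):

	kubectl wait --for=condition=Ready node/no-preload-238598 --timeout=6m
	kubectl -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m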
	I0316 00:23:12.772201  123454 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795626  123454 pod_ready.go:92] pod "etcd-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.795660  123454 pod_ready.go:81] duration metric: took 23.429082ms for pod "etcd-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.795674  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808661  123454 pod_ready.go:92] pod "kube-apiserver-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.808688  123454 pod_ready.go:81] duration metric: took 13.006568ms for pod "kube-apiserver-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.808699  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821578  123454 pod_ready.go:92] pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.821613  123454 pod_ready.go:81] duration metric: took 12.904651ms for pod "kube-controller-manager-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.821627  123454 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.832585  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0316 00:23:12.832616  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0316 00:23:12.838375  123454 pod_ready.go:92] pod "kube-scheduler-no-preload-238598" in "kube-system" namespace has status "Ready":"True"
	I0316 00:23:12.838404  123454 pod_ready.go:81] duration metric: took 16.768452ms for pod "kube-scheduler-no-preload-238598" in "kube-system" namespace to be "Ready" ...
	I0316 00:23:12.838415  123454 pod_ready.go:38] duration metric: took 78.396172ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0316 00:23:12.838435  123454 api_server.go:52] waiting for apiserver process to appear ...
	I0316 00:23:12.838522  123454 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0316 00:23:12.889063  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0316 00:23:12.907225  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0316 00:23:12.924533  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0316 00:23:12.924565  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0316 00:23:12.947224  123454 api_server.go:72] duration metric: took 446.183679ms to wait for apiserver process to appear ...
	I0316 00:23:12.947257  123454 api_server.go:88] waiting for apiserver healthz status ...
	I0316 00:23:12.947281  123454 api_server.go:253] Checking apiserver healthz at https://192.168.50.137:8443/healthz ...
	I0316 00:23:12.975463  123454 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0316 00:23:12.975495  123454 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0316 00:23:13.023702  123454 api_server.go:279] https://192.168.50.137:8443/healthz returned 200:
	ok
	I0316 00:23:13.039598  123454 api_server.go:141] control plane version: v1.29.0-rc.2
	I0316 00:23:13.039638  123454 api_server.go:131] duration metric: took 92.372403ms to wait for apiserver health ...
	I0316 00:23:13.039649  123454 system_pods.go:43] waiting for kube-system pods to appear ...
	I0316 00:23:13.069937  123454 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
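The four metrics-server manifests are applied in a single kubectl call. Once the deployment is serving, the quickest end-to-end check of the addon is the metrics API itself (illustrative, not part of this run, and only meaningful after the metrics-server pod reports Ready):

	kubectl get apiservices v1beta1.metrics.k8s.io
	kubectl top nodes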
	I0316 00:23:13.141358  123454 system_pods.go:59] 5 kube-system pods found
	I0316 00:23:13.141387  123454 system_pods.go:61] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.141391  123454 system_pods.go:61] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.141397  123454 system_pods.go:61] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.141400  123454 system_pods.go:61] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending
	I0316 00:23:13.141404  123454 system_pods.go:61] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.141411  123454 system_pods.go:74] duration metric: took 101.754765ms to wait for pod list to return data ...
	I0316 00:23:13.141419  123454 default_sa.go:34] waiting for default service account to be created ...
	I0316 00:23:13.200153  123454 default_sa.go:45] found service account: "default"
	I0316 00:23:13.200193  123454 default_sa.go:55] duration metric: took 58.765381ms for default service account to be created ...
	I0316 00:23:13.200205  123454 system_pods.go:116] waiting for k8s-apps to be running ...
	I0316 00:23:13.381398  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381431  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.381771  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.381825  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.381840  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.381849  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.381862  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.382154  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.382159  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.382189  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.383303  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.383345  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.383353  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending
	I0316 00:23:13.383360  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.383368  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.383374  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.383384  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.383396  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.383440  123454 retry.go:31] will retry after 221.286986ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.408809  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:13.408839  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:13.409146  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:13.409191  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:13.409195  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:13.612171  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.612205  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612212  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.612221  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.612226  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.612230  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.612236  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.612239  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.612260  123454 retry.go:31] will retry after 311.442515ms: missing components: kube-dns, kube-proxy
	I0316 00:23:13.934136  123454 system_pods.go:86] 7 kube-system pods found
	I0316 00:23:13.934170  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934177  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:13.934185  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:13.934191  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:13.934197  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:13.934204  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:13.934210  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:13.934234  123454 retry.go:31] will retry after 453.147474ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.343055  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.435784176s)
	I0316 00:23:14.343123  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343139  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343497  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343523  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.343540  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.343554  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.343800  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.343876  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.343895  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.404681  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.404725  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404738  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.404748  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.404758  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.404767  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.404777  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.404790  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.404810  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.404821  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending
	I0316 00:23:14.404846  123454 retry.go:31] will retry after 464.575803ms: missing components: kube-dns, kube-proxy
	I0316 00:23:14.447649  123454 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.377663696s)
	I0316 00:23:14.447706  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.447724  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448062  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448083  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448092  123454 main.go:141] libmachine: Making call to close driver server
	I0316 00:23:14.448100  123454 main.go:141] libmachine: (no-preload-238598) Calling .Close
	I0316 00:23:14.448367  123454 main.go:141] libmachine: (no-preload-238598) DBG | Closing plugin on server side
	I0316 00:23:14.448367  123454 main.go:141] libmachine: Successfully made call to close driver server
	I0316 00:23:14.448394  123454 main.go:141] libmachine: Making call to close connection to plugin binary
	I0316 00:23:14.448407  123454 addons.go:470] Verifying addon metrics-server=true in "no-preload-238598"
	I0316 00:23:14.450675  123454 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0316 00:23:14.452378  123454 addons.go:505] duration metric: took 1.951301533s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0316 00:23:14.888167  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:14.888206  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:14.888219  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0316 00:23:14.888226  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:14.888236  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:14.888243  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:14.888252  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0316 00:23:14.888260  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:14.888292  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:14.888301  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:14.888325  123454 retry.go:31] will retry after 490.515879ms: missing components: kube-proxy
	I0316 00:23:15.389667  123454 system_pods.go:86] 9 kube-system pods found
	I0316 00:23:15.389694  123454 system_pods.go:89] "coredns-76f75df574-5drh8" [e86d8e6a-f832-4364-ac68-c69e40b92523] Running
	I0316 00:23:15.389700  123454 system_pods.go:89] "coredns-76f75df574-wg5c8" [a7347306-ab8d-42d0-935c-98f98192e6b7] Running
	I0316 00:23:15.389704  123454 system_pods.go:89] "etcd-no-preload-238598" [423aa6e1-ead6-4f2b-a8f8-76305172bc68] Running
	I0316 00:23:15.389708  123454 system_pods.go:89] "kube-apiserver-no-preload-238598" [4c9e7e7b-bf59-4600-9ab8-8f517626b54c] Running
	I0316 00:23:15.389712  123454 system_pods.go:89] "kube-controller-manager-no-preload-238598" [2ca4b9c1-0cd9-44bf-b8c3-be4ecd6979bd] Running
	I0316 00:23:15.389716  123454 system_pods.go:89] "kube-proxy-h6p8x" [738ca90e-7f8a-4449-8e5b-df714ee8320a] Running
	I0316 00:23:15.389721  123454 system_pods.go:89] "kube-scheduler-no-preload-238598" [554bd3e5-9c3a-4381-adad-d9d0b8f68de9] Running
	I0316 00:23:15.389728  123454 system_pods.go:89] "metrics-server-57f55c9bc5-j5k5h" [cbdf6082-83fb-4af6-95e9-90545e64c898] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0316 00:23:15.389735  123454 system_pods.go:89] "storage-provisioner" [60914654-d240-4165-b045-5b411d99e2e0] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0316 00:23:15.389745  123454 system_pods.go:126] duration metric: took 2.189532563s to wait for k8s-apps to be running ...
	I0316 00:23:15.389757  123454 system_svc.go:44] waiting for kubelet service to be running ....
	I0316 00:23:15.389805  123454 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:15.409241  123454 system_svc.go:56] duration metric: took 19.469575ms WaitForService to wait for kubelet
	I0316 00:23:15.409273  123454 kubeadm.go:576] duration metric: took 2.908240245s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0316 00:23:15.409292  123454 node_conditions.go:102] verifying NodePressure condition ...
	I0316 00:23:15.412530  123454 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0316 00:23:15.412559  123454 node_conditions.go:123] node cpu capacity is 2
	I0316 00:23:15.412570  123454 node_conditions.go:105] duration metric: took 3.272979ms to run NodePressure ...
	I0316 00:23:15.412585  123454 start.go:240] waiting for startup goroutines ...
	I0316 00:23:15.412594  123454 start.go:245] waiting for cluster config update ...
	I0316 00:23:15.412608  123454 start.go:254] writing updated cluster config ...
	I0316 00:23:15.412923  123454 ssh_runner.go:195] Run: rm -f paused
	I0316 00:23:15.468245  123454 start.go:600] kubectl: 1.29.3, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0316 00:23:15.470311  123454 out.go:177] * Done! kubectl is now configured to use "no-preload-238598" cluster and "default" namespace by default
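	(Editor's note: the readiness waits logged above for the "no-preload-238598" cluster - k8s-apps running, default service account, node capacity - can be reproduced by hand against the same kubeconfig context. A minimal sketch, assuming the context name matches the profile name printed in the log; the grep pattern is illustrative only.)

	# Hypothetical manual equivalent of the checks minikube performed above
	kubectl --context no-preload-238598 -n kube-system get pods        # k8s-apps running
	kubectl --context no-preload-238598 get serviceaccount default     # default service account created
	kubectl --context no-preload-238598 describe nodes | grep -E 'cpu:|ephemeral-storage:'   # NodePressure inputs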
	I0316 00:23:28.365163  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:23:28.365500  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:23:28.365516  124077 kubeadm.go:309] 
	I0316 00:23:28.365551  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:23:28.365589  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:23:28.365595  124077 kubeadm.go:309] 
	I0316 00:23:28.365624  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:23:28.365653  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:23:28.365818  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:23:28.365847  124077 kubeadm.go:309] 
	I0316 00:23:28.365990  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:23:28.366056  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:23:28.366099  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:23:28.366109  124077 kubeadm.go:309] 
	I0316 00:23:28.366233  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:23:28.366348  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:23:28.366361  124077 kubeadm.go:309] 
	I0316 00:23:28.366540  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:23:28.366673  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:23:28.366763  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:23:28.366879  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:23:28.366904  124077 kubeadm.go:309] 
	I0316 00:23:28.367852  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:23:28.367989  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:23:28.368095  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0316 00:23:28.368411  124077 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0316 00:23:28.368479  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0316 00:23:28.845362  124077 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0316 00:23:28.861460  124077 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0316 00:23:28.872223  124077 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0316 00:23:28.872249  124077 kubeadm.go:156] found existing configuration files:
	
	I0316 00:23:28.872312  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0316 00:23:28.882608  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0316 00:23:28.882675  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0316 00:23:28.892345  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0316 00:23:28.901604  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0316 00:23:28.901657  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0316 00:23:28.911754  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.921370  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0316 00:23:28.921442  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0316 00:23:28.933190  124077 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0316 00:23:28.943076  124077 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0316 00:23:28.943134  124077 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0316 00:23:28.953349  124077 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0316 00:23:29.033124  124077 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0316 00:23:29.033198  124077 kubeadm.go:309] [preflight] Running pre-flight checks
	I0316 00:23:29.203091  124077 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0316 00:23:29.203255  124077 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0316 00:23:29.203394  124077 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0316 00:23:29.421799  124077 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0316 00:23:29.423928  124077 out.go:204]   - Generating certificates and keys ...
	I0316 00:23:29.424050  124077 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0316 00:23:29.424136  124077 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0316 00:23:29.424267  124077 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0316 00:23:29.424378  124077 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0316 00:23:29.424477  124077 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0316 00:23:29.424556  124077 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0316 00:23:29.424637  124077 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0316 00:23:29.424872  124077 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0316 00:23:29.425137  124077 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0316 00:23:29.425536  124077 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0316 00:23:29.425780  124077 kubeadm.go:309] [certs] Using the existing "sa" key
	I0316 00:23:29.425858  124077 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0316 00:23:29.812436  124077 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0316 00:23:29.921208  124077 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0316 00:23:29.976412  124077 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0316 00:23:30.296800  124077 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0316 00:23:30.318126  124077 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0316 00:23:30.319310  124077 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0316 00:23:30.319453  124077 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0316 00:23:30.472880  124077 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0316 00:23:30.474741  124077 out.go:204]   - Booting up control plane ...
	I0316 00:23:30.474862  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0316 00:23:30.474973  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0316 00:23:30.475073  124077 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0316 00:23:30.475407  124077 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0316 00:23:30.481663  124077 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0316 00:24:10.483886  124077 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0316 00:24:10.484273  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:10.484462  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:15.485049  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:15.485259  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:25.486291  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:25.486552  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:24:45.487553  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:24:45.487831  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.489639  124077 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0316 00:25:25.489992  124077 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0316 00:25:25.490024  124077 kubeadm.go:309] 
	I0316 00:25:25.490110  124077 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0316 00:25:25.490170  124077 kubeadm.go:309] 		timed out waiting for the condition
	I0316 00:25:25.490182  124077 kubeadm.go:309] 
	I0316 00:25:25.490225  124077 kubeadm.go:309] 	This error is likely caused by:
	I0316 00:25:25.490275  124077 kubeadm.go:309] 		- The kubelet is not running
	I0316 00:25:25.490422  124077 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0316 00:25:25.490433  124077 kubeadm.go:309] 
	I0316 00:25:25.490581  124077 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0316 00:25:25.490644  124077 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0316 00:25:25.490693  124077 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0316 00:25:25.490703  124077 kubeadm.go:309] 
	I0316 00:25:25.490813  124077 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0316 00:25:25.490942  124077 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0316 00:25:25.490957  124077 kubeadm.go:309] 
	I0316 00:25:25.491102  124077 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0316 00:25:25.491208  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0316 00:25:25.491333  124077 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0316 00:25:25.491449  124077 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0316 00:25:25.491461  124077 kubeadm.go:309] 
	I0316 00:25:25.492437  124077 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0316 00:25:25.492551  124077 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0316 00:25:25.492645  124077 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0316 00:25:25.492726  124077 kubeadm.go:393] duration metric: took 8m3.343169045s to StartCluster
	I0316 00:25:25.492812  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0316 00:25:25.492908  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0316 00:25:25.541383  124077 cri.go:89] found id: ""
	I0316 00:25:25.541452  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.541464  124077 logs.go:278] No container was found matching "kube-apiserver"
	I0316 00:25:25.541484  124077 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0316 00:25:25.541563  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0316 00:25:25.578190  124077 cri.go:89] found id: ""
	I0316 00:25:25.578224  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.578234  124077 logs.go:278] No container was found matching "etcd"
	I0316 00:25:25.578242  124077 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0316 00:25:25.578299  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0316 00:25:25.618394  124077 cri.go:89] found id: ""
	I0316 00:25:25.618423  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.618441  124077 logs.go:278] No container was found matching "coredns"
	I0316 00:25:25.618450  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0316 00:25:25.618523  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0316 00:25:25.654036  124077 cri.go:89] found id: ""
	I0316 00:25:25.654062  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.654073  124077 logs.go:278] No container was found matching "kube-scheduler"
	I0316 00:25:25.654081  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0316 00:25:25.654145  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0316 00:25:25.688160  124077 cri.go:89] found id: ""
	I0316 00:25:25.688189  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.688200  124077 logs.go:278] No container was found matching "kube-proxy"
	I0316 00:25:25.688209  124077 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0316 00:25:25.688279  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0316 00:25:25.723172  124077 cri.go:89] found id: ""
	I0316 00:25:25.723207  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.723219  124077 logs.go:278] No container was found matching "kube-controller-manager"
	I0316 00:25:25.723228  124077 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0316 00:25:25.723291  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0316 00:25:25.762280  124077 cri.go:89] found id: ""
	I0316 00:25:25.762329  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.762340  124077 logs.go:278] No container was found matching "kindnet"
	I0316 00:25:25.762348  124077 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0316 00:25:25.762426  124077 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0316 00:25:25.816203  124077 cri.go:89] found id: ""
	I0316 00:25:25.816236  124077 logs.go:276] 0 containers: []
	W0316 00:25:25.816248  124077 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0316 00:25:25.816262  124077 logs.go:123] Gathering logs for kubelet ...
	I0316 00:25:25.816280  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0316 00:25:25.872005  124077 logs.go:123] Gathering logs for dmesg ...
	I0316 00:25:25.872042  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0316 00:25:25.885486  124077 logs.go:123] Gathering logs for describe nodes ...
	I0316 00:25:25.885524  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0316 00:25:25.970263  124077 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0316 00:25:25.970293  124077 logs.go:123] Gathering logs for CRI-O ...
	I0316 00:25:25.970309  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0316 00:25:26.086251  124077 logs.go:123] Gathering logs for container status ...
	I0316 00:25:26.086292  124077 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0316 00:25:26.129325  124077 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0316 00:25:26.129381  124077 out.go:239] * 
	W0316 00:25:26.129449  124077 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.129481  124077 out.go:239] * 
	W0316 00:25:26.130315  124077 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0316 00:25:26.134349  124077 out.go:177] 
	W0316 00:25:26.135674  124077 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0316 00:25:26.135728  124077 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0316 00:25:26.135751  124077 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0316 00:25:26.137389  124077 out.go:177] 
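	(Editor's note: the failure above ends with a suggestion to pass a kubelet cgroup-driver override. A minimal sketch of retrying the same profile with that flag; the profile name is assumed from the node name in the CRI-O log below, and all other start flags are omitted, so this is illustrative only.)

	# Hypothetical retry applying the suggestion from the log above
	minikube start -p old-k8s-version-402923 \
	  --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd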
	
	
	==> CRI-O <==
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.736114378Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549380736082503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3c923e42-9533-441e-8d21-402fe624d18f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.736851872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0000cba9-c527-43d5-b136-a9047ecdd755 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.736900631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0000cba9-c527-43d5-b136-a9047ecdd755 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.736939217Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0000cba9-c527-43d5-b136-a9047ecdd755 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.773019781Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ee7d616e-3ea0-46f8-ba27-600c25dce166 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.773090377Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ee7d616e-3ea0-46f8-ba27-600c25dce166 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.774234207Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3921c665-a554-4435-b425-4353617a5b85 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.774768978Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549380774742353,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3921c665-a554-4435-b425-4353617a5b85 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.775260253Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e837467-bccd-4cee-9293-fcb307fde5b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.775315128Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e837467-bccd-4cee-9293-fcb307fde5b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.775389786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=6e837467-bccd-4cee-9293-fcb307fde5b2 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.809605000Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e861749-ed8e-4207-ab67-3b68c43bdc20 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.809693635Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e861749-ed8e-4207-ab67-3b68c43bdc20 name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.810886975Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=0c50c6d0-c601-4585-b292-b198909ca439 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.811275623Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549380811251969,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=0c50c6d0-c601-4585-b292-b198909ca439 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.811811126Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc891a33-da34-4690-b979-cf6219b2491a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.811897863Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc891a33-da34-4690-b979-cf6219b2491a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.811935430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cc891a33-da34-4690-b979-cf6219b2491a name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.853187226Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=354b6370-bc0d-4b1f-8028-31c38e84de8b name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.853287670Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=354b6370-bc0d-4b1f-8028-31c38e84de8b name=/runtime.v1.RuntimeService/Version
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.854782142Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b006e779-2113-4022-b38a-899ccd15d40e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.855261437Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710549380855231145,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b006e779-2113-4022-b38a-899ccd15d40e name=/runtime.v1.ImageService/ImageFsInfo
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.855901397Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7322d878-2c20-448b-90fa-e079c846d751 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.855997723Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7322d878-2c20-448b-90fa-e079c846d751 name=/runtime.v1.RuntimeService/ListContainers
	Mar 16 00:36:20 old-k8s-version-402923 crio[648]: time="2024-03-16 00:36:20.856056025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=7322d878-2c20-448b-90fa-e079c846d751 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar16 00:16] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.061034] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.045188] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar16 00:17] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.786648] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.691488] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.819996] systemd-fstab-generator[569]: Ignoring "noauto" option for root device
	[  +0.063026] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069540] systemd-fstab-generator[581]: Ignoring "noauto" option for root device
	[  +0.190023] systemd-fstab-generator[595]: Ignoring "noauto" option for root device
	[  +0.172778] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.261353] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +7.077973] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +0.071596] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.890538] systemd-fstab-generator[964]: Ignoring "noauto" option for root device
	[ +12.663086] kauditd_printk_skb: 46 callbacks suppressed
	[Mar16 00:21] systemd-fstab-generator[5049]: Ignoring "noauto" option for root device
	[Mar16 00:23] systemd-fstab-generator[5330]: Ignoring "noauto" option for root device
	[  +0.068986] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 00:36:21 up 19 min,  0 users,  load average: 0.00, 0.02, 0.04
	Linux old-k8s-version-402923 5.10.207 #1 SMP Fri Mar 15 21:13:47 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/ipsock.go:280 +0x4d4
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: net.(*Resolver).resolveAddrList(0x70c5740, 0x4f7fe40, 0xc000cfc1e0, 0x48abf6d, 0x4, 0x48ab5d6, 0x3, 0xc000aff740, 0x24, 0x0, ...)
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/dial.go:221 +0x47d
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: net.(*Dialer).DialContext(0xc000bb1380, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000aff740, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/dial.go:403 +0x22b
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc000bba960, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000aff740, 0x24, 0x60, 0x7f6c04545798, 0x118, ...)
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/connrotation/connrotation.go:73 +0x7e
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: net/http.(*Transport).dial(0xc000270140, 0x4f7fe00, 0xc000052030, 0x48ab5d6, 0x3, 0xc000aff740, 0x24, 0x0, 0x0, 0x0, ...)
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/http/transport.go:1141 +0x1fd
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: net/http.(*Transport).dialConn(0xc000270140, 0x4f7fe00, 0xc000052030, 0x0, 0xc0002ec600, 0x5, 0xc000aff740, 0x24, 0x0, 0xc000caea20, ...)
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/http/transport.go:1575 +0x1abb
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: net/http.(*Transport).dialConnFor(0xc000270140, 0xc000acb8c0)
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]: created by net/http.(*Transport).queueForDial
	Mar 16 00:36:16 old-k8s-version-402923 kubelet[6773]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 16 00:36:16 old-k8s-version-402923 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 16 00:36:16 old-k8s-version-402923 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 16 00:36:16 old-k8s-version-402923 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Mar 16 00:36:16 old-k8s-version-402923 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 16 00:36:16 old-k8s-version-402923 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 16 00:36:17 old-k8s-version-402923 kubelet[6782]: I0316 00:36:17.011030    6782 server.go:416] Version: v1.20.0
	Mar 16 00:36:17 old-k8s-version-402923 kubelet[6782]: I0316 00:36:17.011440    6782 server.go:837] Client rotation is on, will bootstrap in background
	Mar 16 00:36:17 old-k8s-version-402923 kubelet[6782]: I0316 00:36:17.013607    6782 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 16 00:36:17 old-k8s-version-402923 kubelet[6782]: W0316 00:36:17.014577    6782 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 16 00:36:17 old-k8s-version-402923 kubelet[6782]: I0316 00:36:17.014717    6782 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 2 (249.089136ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-402923" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (109.32s)
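The kubeadm output captured in this failure suggests inspecting the kubelet and the CRI-O runtime directly on the node. Below is a minimal sketch of those checks against this report's profile (old-k8s-version-402923); the commands are the ones quoted in the log above, and the cgroup-driver flag is the suggestion minikube itself printed at 00:25:26, not a verified fix:

	# Check kubelet health on the node (commands quoted from the kubeadm output above)
	minikube -p old-k8s-version-402923 ssh "sudo systemctl status kubelet"
	minikube -p old-k8s-version-402923 ssh "sudo journalctl -xeu kubelet"
	# List Kubernetes containers through the CRI-O socket
	minikube -p old-k8s-version-402923 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Retry with the systemd cgroup driver, as suggested in the log above
	minikube start -p old-k8s-version-402923 --extra-config=kubelet.cgroup-driver=systemd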

                                                
                                    

Test pass (249/319)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 11.45
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.14
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.28.4/json-events 4.19
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.07
18 TestDownloadOnly/v1.28.4/DeleteAll 0.14
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 8.43
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.07
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.14
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.57
31 TestOffline 126.6
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 143.11
38 TestAddons/parallel/Registry 14.88
40 TestAddons/parallel/InspektorGadget 11.13
42 TestAddons/parallel/HelmTiller 14.73
44 TestAddons/parallel/CSI 85.7
45 TestAddons/parallel/Headlamp 14.54
46 TestAddons/parallel/CloudSpanner 5.56
47 TestAddons/parallel/LocalPath 55.62
48 TestAddons/parallel/NvidiaDevicePlugin 7.02
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.13
54 TestCertOptions 59.07
55 TestCertExpiration 291.06
57 TestForceSystemdFlag 77.98
58 TestForceSystemdEnv 49.74
60 TestKVMDriverInstallOrUpdate 5.51
64 TestErrorSpam/setup 43.93
65 TestErrorSpam/start 0.38
66 TestErrorSpam/status 0.77
67 TestErrorSpam/pause 1.64
68 TestErrorSpam/unpause 1.72
69 TestErrorSpam/stop 5.77
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 96.01
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 41.31
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.09
81 TestFunctional/serial/CacheCmd/cache/add_local 1.93
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.68
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
89 TestFunctional/serial/ExtraConfig 33.27
90 TestFunctional/serial/ComponentHealth 0.06
91 TestFunctional/serial/LogsCmd 1.53
92 TestFunctional/serial/LogsFileCmd 1.54
93 TestFunctional/serial/InvalidService 3.97
95 TestFunctional/parallel/ConfigCmd 0.43
96 TestFunctional/parallel/DashboardCmd 10.49
97 TestFunctional/parallel/DryRun 0.28
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 0.84
103 TestFunctional/parallel/ServiceCmdConnect 10.71
104 TestFunctional/parallel/AddonsCmd 0.16
105 TestFunctional/parallel/PersistentVolumeClaim 40.38
107 TestFunctional/parallel/SSHCmd 0.46
108 TestFunctional/parallel/CpCmd 1.65
109 TestFunctional/parallel/MySQL 30.39
110 TestFunctional/parallel/FileSync 0.29
111 TestFunctional/parallel/CertSync 1.75
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.44
119 TestFunctional/parallel/License 0.21
120 TestFunctional/parallel/ProfileCmd/profile_not_create 0.35
130 TestFunctional/parallel/ProfileCmd/profile_list 0.34
131 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.34
133 TestFunctional/parallel/Version/short 0.07
134 TestFunctional/parallel/Version/components 0.88
136 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
137 TestFunctional/parallel/ImageCommands/ImageListJson 0.34
138 TestFunctional/parallel/ImageCommands/ImageListYaml 2.17
139 TestFunctional/parallel/ImageCommands/ImageBuild 3.56
140 TestFunctional/parallel/ImageCommands/Setup 1.38
141 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.19
142 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.64
143 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 8.31
144 TestFunctional/parallel/ServiceCmd/List 0.46
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.45
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.1
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.28
149 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
150 TestFunctional/parallel/ServiceCmd/Format 0.46
151 TestFunctional/parallel/ServiceCmd/URL 0.4
152 TestFunctional/parallel/MountCmd/any-port 20.69
153 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.51
154 TestFunctional/parallel/ImageCommands/ImageRemove 1.85
155 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 9.31
156 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.86
157 TestFunctional/parallel/MountCmd/specific-port 1.79
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.4
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMultiControlPlane/serial/StartCluster 220.47
166 TestMultiControlPlane/serial/DeployApp 5.76
167 TestMultiControlPlane/serial/PingHostFromPods 1.38
168 TestMultiControlPlane/serial/AddWorkerNode 48.89
169 TestMultiControlPlane/serial/NodeLabels 0.07
170 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.53
171 TestMultiControlPlane/serial/CopyFile 13.46
173 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
175 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
177 TestMultiControlPlane/serial/DeleteSecondaryNode 17.6
178 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.38
180 TestMultiControlPlane/serial/RestartCluster 349.5
181 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMultiControlPlane/serial/AddSecondaryNode 73.91
183 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.59
187 TestJSONOutput/start/Command 99.12
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.76
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.67
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.45
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 90.54
219 TestMountStart/serial/StartWithMountFirst 26.24
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 27.14
222 TestMountStart/serial/VerifyMountSecond 0.39
223 TestMountStart/serial/DeleteFirst 0.89
224 TestMountStart/serial/VerifyMountPostDelete 0.4
225 TestMountStart/serial/Stop 1.34
226 TestMountStart/serial/RestartStopped 23.91
227 TestMountStart/serial/VerifyMountPostStop 0.4
230 TestMultiNode/serial/FreshStart2Nodes 104.94
231 TestMultiNode/serial/DeployApp2Nodes 5.06
232 TestMultiNode/serial/PingHostFrom2Pods 0.91
233 TestMultiNode/serial/AddNode 41.33
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.24
236 TestMultiNode/serial/CopyFile 7.82
237 TestMultiNode/serial/StopNode 3.22
238 TestMultiNode/serial/StartAfterStop 29.34
240 TestMultiNode/serial/DeleteNode 2.41
242 TestMultiNode/serial/RestartMultiNode 172.94
243 TestMultiNode/serial/ValidateNameConflict 45.86
250 TestScheduledStopUnix 112.68
254 TestRunningBinaryUpgrade 221.97
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 96.88
261 TestNoKubernetes/serial/StartWithStopK8s 7.43
262 TestNoKubernetes/serial/Start 29.97
271 TestPause/serial/Start 105.42
272 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
273 TestNoKubernetes/serial/ProfileList 1.12
274 TestNoKubernetes/serial/Stop 1.61
275 TestNoKubernetes/serial/StartNoArgs 46.14
276 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
284 TestNetworkPlugins/group/false 3.63
289 TestStoppedBinaryUpgrade/Setup 0.44
290 TestStoppedBinaryUpgrade/Upgrade 120.17
291 TestStoppedBinaryUpgrade/MinikubeLogs 0.98
295 TestStartStop/group/no-preload/serial/FirstStart 154.31
297 TestStartStop/group/embed-certs/serial/FirstStart 127.75
299 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 57.82
300 TestStartStop/group/no-preload/serial/DeployApp 8.3
301 TestStartStop/group/embed-certs/serial/DeployApp 9.3
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.06
304 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.16
306 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.29
307 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.06
313 TestStartStop/group/no-preload/serial/SecondStart 695.26
314 TestStartStop/group/embed-certs/serial/SecondStart 567.55
316 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 544.89
317 TestStartStop/group/old-k8s-version/serial/Stop 4.3
318 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
329 TestStartStop/group/newest-cni/serial/FirstStart 58.96
330 TestNetworkPlugins/group/auto/Start 116.87
331 TestNetworkPlugins/group/kindnet/Start 85.44
332 TestStartStop/group/newest-cni/serial/DeployApp 0
333 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.45
334 TestStartStop/group/newest-cni/serial/Stop 10.42
335 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
336 TestStartStop/group/newest-cni/serial/SecondStart 83.06
337 TestNetworkPlugins/group/auto/KubeletFlags 0.21
338 TestNetworkPlugins/group/auto/NetCatPod 10.28
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.21
341 TestNetworkPlugins/group/kindnet/NetCatPod 10.23
342 TestNetworkPlugins/group/auto/DNS 0.2
343 TestNetworkPlugins/group/auto/Localhost 0.18
344 TestNetworkPlugins/group/auto/HairPin 0.14
345 TestNetworkPlugins/group/kindnet/DNS 0.18
346 TestNetworkPlugins/group/kindnet/Localhost 0.16
347 TestNetworkPlugins/group/kindnet/HairPin 0.15
348 TestNetworkPlugins/group/calico/Start 93.95
349 TestNetworkPlugins/group/custom-flannel/Start 109.11
350 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
351 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
352 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.25
353 TestStartStop/group/newest-cni/serial/Pause 3.06
354 TestNetworkPlugins/group/enable-default-cni/Start 145.9
355 TestNetworkPlugins/group/flannel/Start 159.32
356 TestNetworkPlugins/group/calico/ControllerPod 6.01
357 TestNetworkPlugins/group/calico/KubeletFlags 0.24
358 TestNetworkPlugins/group/calico/NetCatPod 11.25
359 TestNetworkPlugins/group/calico/DNS 0.21
360 TestNetworkPlugins/group/calico/Localhost 0.18
361 TestNetworkPlugins/group/calico/HairPin 0.18
362 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
363 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.33
364 TestNetworkPlugins/group/custom-flannel/DNS 0.21
365 TestNetworkPlugins/group/custom-flannel/Localhost 0.18
366 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
367 TestNetworkPlugins/group/bridge/Start 99.49
368 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
369 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
370 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
371 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
372 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
373 TestNetworkPlugins/group/flannel/ControllerPod 6.01
374 TestNetworkPlugins/group/flannel/KubeletFlags 0.22
375 TestNetworkPlugins/group/flannel/NetCatPod 11.24
376 TestNetworkPlugins/group/flannel/DNS 0.19
377 TestNetworkPlugins/group/flannel/Localhost 0.16
378 TestNetworkPlugins/group/flannel/HairPin 0.16
379 TestNetworkPlugins/group/bridge/KubeletFlags 0.22
380 TestNetworkPlugins/group/bridge/NetCatPod 10.22
381 TestNetworkPlugins/group/bridge/DNS 0.17
382 TestNetworkPlugins/group/bridge/Localhost 0.13
383 TestNetworkPlugins/group/bridge/HairPin 0.13
x
+
TestDownloadOnly/v1.20.0/json-events (11.45s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-255255 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-255255 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (11.453173757s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (11.45s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-255255
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-255255: exit status 85 (74.083118ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |          |
	|         | -p download-only-255255        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 22:56:09
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 22:56:09.680698   82882 out.go:291] Setting OutFile to fd 1 ...
	I0315 22:56:09.680817   82882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:09.680827   82882 out.go:304] Setting ErrFile to fd 2...
	I0315 22:56:09.680831   82882 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:09.681016   82882 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	W0315 22:56:09.681151   82882 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17991-75602/.minikube/config/config.json: open /home/jenkins/minikube-integration/17991-75602/.minikube/config/config.json: no such file or directory
	I0315 22:56:09.681711   82882 out.go:298] Setting JSON to true
	I0315 22:56:09.682558   82882 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5920,"bootTime":1710537450,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 22:56:09.682632   82882 start.go:139] virtualization: kvm guest
	I0315 22:56:09.685103   82882 out.go:97] [download-only-255255] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 22:56:09.686581   82882 out.go:169] MINIKUBE_LOCATION=17991
	W0315 22:56:09.685242   82882 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball: no such file or directory
	I0315 22:56:09.685298   82882 notify.go:220] Checking for updates...
	I0315 22:56:09.689278   82882 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 22:56:09.690655   82882 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:56:09.692022   82882 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:09.693286   82882 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0315 22:56:09.695644   82882 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 22:56:09.695874   82882 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 22:56:09.729847   82882 out.go:97] Using the kvm2 driver based on user configuration
	I0315 22:56:09.729883   82882 start.go:297] selected driver: kvm2
	I0315 22:56:09.729896   82882 start.go:901] validating driver "kvm2" against <nil>
	I0315 22:56:09.730233   82882 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:09.730296   82882 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 22:56:09.744946   82882 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 22:56:09.744986   82882 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 22:56:09.745488   82882 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0315 22:56:09.745618   82882 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 22:56:09.745673   82882 cni.go:84] Creating CNI manager for ""
	I0315 22:56:09.745687   82882 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:56:09.745694   82882 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 22:56:09.745735   82882 start.go:340] cluster config:
	{Name:download-only-255255 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-255255 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 22:56:09.745882   82882 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:09.747621   82882 out.go:97] Downloading VM boot image ...
	I0315 22:56:09.747662   82882 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/iso/amd64/minikube-v1.32.1-1710520390-17991-amd64.iso
	I0315 22:56:13.222853   82882 out.go:97] Starting "download-only-255255" primary control-plane node in "download-only-255255" cluster
	I0315 22:56:13.222890   82882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 22:56:13.247267   82882 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 22:56:13.247306   82882 cache.go:56] Caching tarball of preloaded images
	I0315 22:56:13.247505   82882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 22:56:13.249439   82882 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0315 22:56:13.249493   82882 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0315 22:56:13.268972   82882 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0315 22:56:18.631538   82882 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0315 22:56:18.631631   82882 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0315 22:56:19.509166   82882 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0315 22:56:19.509584   82882 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/download-only-255255/config.json ...
	I0315 22:56:19.509618   82882 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/download-only-255255/config.json: {Name:mk39d4d63c0abf94319b7153b100ec34a8c760d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:56:19.509776   82882 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0315 22:56:19.509920   82882 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-255255 host does not exist
	  To start a cluster, run: "minikube start -p download-only-255255"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)
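For reference, the LogsDuration output above records minikube fetching the Kubernetes v1.20.0 cri-o preload tarball and verifying its md5 checksum before caching it. A minimal sketch of the equivalent manual download and verification, using only the URL and checksum quoted in that log:

	# URL and md5 taken verbatim from the download.go/preload.go lines above
	curl -fLo preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 \
	  "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4"
	echo "f93b07cde9c3289306cbaeb7a1803c19  preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" | md5sum -c -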

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-255255
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (4.19s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-465986 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-465986 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.189787933s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (4.19s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-465986
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-465986: exit status 85 (72.339153ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-255255        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-255255        | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | -o=json --download-only        | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-465986        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 22:56:21
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 22:56:21.483831   83051 out.go:291] Setting OutFile to fd 1 ...
	I0315 22:56:21.483989   83051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:21.484001   83051 out.go:304] Setting ErrFile to fd 2...
	I0315 22:56:21.484005   83051 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:21.484236   83051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 22:56:21.484849   83051 out.go:298] Setting JSON to true
	I0315 22:56:21.485695   83051 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5931,"bootTime":1710537450,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 22:56:21.485755   83051 start.go:139] virtualization: kvm guest
	I0315 22:56:21.488124   83051 out.go:97] [download-only-465986] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 22:56:21.489760   83051 out.go:169] MINIKUBE_LOCATION=17991
	I0315 22:56:21.488312   83051 notify.go:220] Checking for updates...
	I0315 22:56:21.492649   83051 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 22:56:21.494106   83051 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:56:21.495515   83051 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:21.496792   83051 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-465986 host does not exist
	  To start a cluster, run: "minikube start -p download-only-465986"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-465986
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (8.43s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-546206 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-546206 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.432765035s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (8.43s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-546206
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-546206: exit status 85 (71.21972ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-255255           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-255255           | download-only-255255 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | -o=json --download-only           | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-465986           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| delete  | -p download-only-465986           | download-only-465986 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC | 15 Mar 24 22:56 UTC |
	| start   | -o=json --download-only           | download-only-546206 | jenkins | v1.32.0 | 15 Mar 24 22:56 UTC |                     |
	|         | -p download-only-546206           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/15 22:56:26
	Running on machine: ubuntu-20-agent-7
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0315 22:56:26.034162   83203 out.go:291] Setting OutFile to fd 1 ...
	I0315 22:56:26.034315   83203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:26.034326   83203 out.go:304] Setting ErrFile to fd 2...
	I0315 22:56:26.034332   83203 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 22:56:26.034522   83203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 22:56:26.035114   83203 out.go:298] Setting JSON to true
	I0315 22:56:26.036058   83203 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":5936,"bootTime":1710537450,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 22:56:26.036127   83203 start.go:139] virtualization: kvm guest
	I0315 22:56:26.038361   83203 out.go:97] [download-only-546206] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 22:56:26.038612   83203 notify.go:220] Checking for updates...
	I0315 22:56:26.039928   83203 out.go:169] MINIKUBE_LOCATION=17991
	I0315 22:56:26.041485   83203 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 22:56:26.043171   83203 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 22:56:26.044767   83203 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 22:56:26.046425   83203 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0315 22:56:26.049173   83203 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0315 22:56:26.049445   83203 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 22:56:26.083596   83203 out.go:97] Using the kvm2 driver based on user configuration
	I0315 22:56:26.083637   83203 start.go:297] selected driver: kvm2
	I0315 22:56:26.083645   83203 start.go:901] validating driver "kvm2" against <nil>
	I0315 22:56:26.084027   83203 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:26.084155   83203 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17991-75602/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0315 22:56:26.099866   83203 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0315 22:56:26.099929   83203 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0315 22:56:26.100407   83203 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0315 22:56:26.100550   83203 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0315 22:56:26.100612   83203 cni.go:84] Creating CNI manager for ""
	I0315 22:56:26.100626   83203 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0315 22:56:26.100633   83203 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0315 22:56:26.100687   83203 start.go:340] cluster config:
	{Name:download-only-546206 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-546206 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 22:56:26.100781   83203 iso.go:125] acquiring lock: {Name:mk679f027f9e50ee78a0d1853e4c249f45d74b45 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0315 22:56:26.102506   83203 out.go:97] Starting "download-only-546206" primary control-plane node in "download-only-546206" cluster
	I0315 22:56:26.102539   83203 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 22:56:26.123461   83203 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0315 22:56:26.123516   83203 cache.go:56] Caching tarball of preloaded images
	I0315 22:56:26.123691   83203 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 22:56:26.125689   83203 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0315 22:56:26.125724   83203 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0315 22:56:26.148577   83203 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0315 22:56:28.505431   83203 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0315 22:56:28.505539   83203 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/17991-75602/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0315 22:56:29.245816   83203 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0315 22:56:29.246187   83203 profile.go:142] Saving config to /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/download-only-546206/config.json ...
	I0315 22:56:29.246221   83203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/download-only-546206/config.json: {Name:mk953cec40351fdb0621eeca9fbc07b1d14b9fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0315 22:56:29.246396   83203 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0315 22:56:29.246539   83203 download.go:107] Downloading: https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.29.0-rc.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/17991-75602/.minikube/cache/linux/amd64/v1.29.0-rc.2/kubectl
	
	
	* The control-plane node download-only-546206 host does not exist
	  To start a cluster, run: "minikube start -p download-only-546206"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.07s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.14s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-546206
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.57s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-349079 --alsologtostderr --binary-mirror http://127.0.0.1:37207 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-349079" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-349079
--- PASS: TestBinaryMirror (0.57s)

TestOffline (126.6s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-154828 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-154828 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m5.602229512s)
helpers_test.go:175: Cleaning up "offline-crio-154828" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-154828
--- PASS: TestOffline (126.60s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-097314
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-097314: exit status 85 (66.148257ms)

-- stdout --
	* Profile "addons-097314" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-097314"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-097314
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-097314: exit status 85 (65.601816ms)

-- stdout --
	* Profile "addons-097314" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-097314"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (143.11s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-097314 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-097314 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m23.105527639s)
--- PASS: TestAddons/Setup (143.11s)

TestAddons/parallel/Registry (14.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 27.731687ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-7bpx6" [f08323c1-5f57-4428-ab07-fa1dd1960c2c] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.006087989s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-bp44p" [03d529e0-4bcd-4fa9-a95b-2921fe26e9cc] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.015034665s
addons_test.go:340: (dbg) Run:  kubectl --context addons-097314 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-097314 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-097314 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.996699957s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 ip
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.88s)

TestAddons/parallel/InspektorGadget (11.13s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-bzqbb" [f316897a-14a4-4d60-a680-3ed2dd3166ee] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.005355909s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-097314
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-097314: (6.126967717s)
--- PASS: TestAddons/parallel/InspektorGadget (11.13s)

TestAddons/parallel/HelmTiller (14.73s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.252122ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-5s4t7" [159edcb2-34c6-484f-b9c1-7b4d9f4cc492] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007791914s
addons_test.go:473: (dbg) Run:  kubectl --context addons-097314 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-097314 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.04345559s)
addons_test.go:478: kubectl --context addons-097314 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:473: (dbg) Run:  kubectl --context addons-097314 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-097314 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (2.750732853s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (14.73s)

TestAddons/parallel/CSI (85.7s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 28.586014ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-097314 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/03/15 22:59:13 [DEBUG] GET http://192.168.39.35:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-097314 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [105fa486-5df0-4e5b-8db3-0a81f02da830] Pending
helpers_test.go:344: "task-pv-pod" [105fa486-5df0-4e5b-8db3-0a81f02da830] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [105fa486-5df0-4e5b-8db3-0a81f02da830] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 21.003815278s
addons_test.go:584: (dbg) Run:  kubectl --context addons-097314 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-097314 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-097314 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-097314 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-097314 delete pod task-pv-pod: (1.039275635s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-097314 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-097314 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-097314 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [c03d83fb-608a-46bd-8b9a-20e0fb339b42] Pending
helpers_test.go:344: "task-pv-pod-restore" [c03d83fb-608a-46bd-8b9a-20e0fb339b42] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [c03d83fb-608a-46bd-8b9a-20e0fb339b42] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.004660816s
addons_test.go:626: (dbg) Run:  kubectl --context addons-097314 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-097314 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-097314 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-097314 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.869196218s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (85.70s)

TestAddons/parallel/Headlamp (14.54s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-097314 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-097314 --alsologtostderr -v=1: (1.535089967s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-6q49r" [1830ee4b-4662-4561-b106-65e211f79e01] Pending
helpers_test.go:344: "headlamp-5485c556b-6q49r" [1830ee4b-4662-4561-b106-65e211f79e01] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-6q49r" [1830ee4b-4662-4561-b106-65e211f79e01] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004358993s
--- PASS: TestAddons/parallel/Headlamp (14.54s)

TestAddons/parallel/CloudSpanner (5.56s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-mb9fc" [9e19dae8-0109-477d-a76f-5805ec456869] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003978392s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-097314
--- PASS: TestAddons/parallel/CloudSpanner (5.56s)

TestAddons/parallel/LocalPath (55.62s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-097314 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-097314 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [c6de3eb2-69e5-4094-9386-99d119d40432] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [c6de3eb2-69e5-4094-9386-99d119d40432] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [c6de3eb2-69e5-4094-9386-99d119d40432] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.004654385s
addons_test.go:891: (dbg) Run:  kubectl --context addons-097314 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 ssh "cat /opt/local-path-provisioner/pvc-c163d35d-fa3b-40ab-b865-3fb0f205250a_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-097314 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-097314 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-097314 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-097314 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.74393654s)
--- PASS: TestAddons/parallel/LocalPath (55.62s)

TestAddons/parallel/NvidiaDevicePlugin (7.02s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-gpjp2" [2e033f82-a2e7-42b2-9052-980b0046daa3] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005333814s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-097314
addons_test.go:955: (dbg) Done: out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-097314: (1.011386505s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (7.02s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-rb286" [e8f245e4-2a42-4c1e-bc01-a560ebc55844] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.006021427s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-097314 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-097314 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestCertOptions (59.07s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-313368 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-313368 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (57.574781424s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-313368 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-313368 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-313368 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-313368" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-313368
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-313368: (1.039892126s)
--- PASS: TestCertOptions (59.07s)

TestCertExpiration (291.06s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-982877 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0316 00:03:58.905644   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0316 00:04:08.402823   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-982877 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m9.303812775s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-982877 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-982877 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.740116964s)
helpers_test.go:175: Cleaning up "cert-expiration-982877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-982877
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-982877: (1.018106648s)
--- PASS: TestCertExpiration (291.06s)

TestForceSystemdFlag (77.98s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-844359 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-844359 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m16.644095735s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-844359 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-844359" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-844359
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-844359: (1.084589263s)
--- PASS: TestForceSystemdFlag (77.98s)

TestForceSystemdEnv (49.74s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-380757 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-380757 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (48.702278972s)
helpers_test.go:175: Cleaning up "force-systemd-env-380757" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-380757
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-380757: (1.032751707s)
--- PASS: TestForceSystemdEnv (49.74s)

TestKVMDriverInstallOrUpdate (5.51s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
E0316 00:03:51.448440   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
--- PASS: TestKVMDriverInstallOrUpdate (5.51s)

TestErrorSpam/setup (43.93s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-923918 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-923918 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-923918 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-923918 --driver=kvm2  --container-runtime=crio: (43.926986175s)
--- PASS: TestErrorSpam/setup (43.93s)

TestErrorSpam/start (0.38s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 start --dry-run
--- PASS: TestErrorSpam/start (0.38s)

TestErrorSpam/status (0.77s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 status
--- PASS: TestErrorSpam/status (0.77s)

TestErrorSpam/pause (1.64s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 pause
--- PASS: TestErrorSpam/pause (1.64s)

TestErrorSpam/unpause (1.72s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 unpause
--- PASS: TestErrorSpam/unpause (1.72s)

TestErrorSpam/stop (5.77s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 stop: (2.299998028s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 stop: (2.077564867s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-923918 --log_dir /tmp/nospam-923918 stop: (1.392765972s)
--- PASS: TestErrorSpam/stop (5.77s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17991-75602/.minikube/files/etc/test/nested/copy/82870/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (96.01s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332624 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-332624 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m36.012504252s)
--- PASS: TestFunctional/serial/StartWithProxy (96.01s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (41.31s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332624 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-332624 --alsologtostderr -v=8: (41.312471435s)
functional_test.go:659: soft start took 41.313242477s for "functional-332624" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.31s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.08s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-332624 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 cache add registry.k8s.io/pause:3.3: (1.142435949s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 cache add registry.k8s.io/pause:latest: (1.016912075s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.09s)

TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-332624 /tmp/TestFunctionalserialCacheCmdcacheadd_local2114880252/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cache add minikube-local-cache-test:functional-332624
functional_test.go:1085: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 cache add minikube-local-cache-test:functional-332624: (1.576292327s)
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cache delete minikube-local-cache-test:functional-332624
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-332624
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.93s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (233.118333ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.68s)

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 kubectl -- --context functional-332624 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-332624 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (33.27s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332624 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0315 23:08:58.906398   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:58.912170   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:58.922453   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:58.942739   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:58.983046   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:59.063398   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:59.223881   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:08:59.544493   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:09:00.185520   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-332624 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.265875318s)
functional_test.go:757: restart took 33.265970631s for "functional-332624" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (33.27s)
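
The restart above differs from a plain start only by the component flag passed through to the apiserver; a sketch of the same invocation (NamespaceAutoProvision is the admission plugin this test happens to enable, and any apiserver flag follows the same component.key=value form):

    out/minikube-linux-amd64 start -p functional-332624 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all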

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-332624 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 logs
E0315 23:09:01.466077   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 logs: (1.526450543s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 logs --file /tmp/TestFunctionalserialLogsFileCmd3242007405/001/logs.txt
E0315 23:09:04.027452   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 logs --file /tmp/TestFunctionalserialLogsFileCmd3242007405/001/logs.txt: (1.540268658s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.54s)

                                                
                                    
TestFunctional/serial/InvalidService (3.97s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-332624 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-332624
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-332624: exit status 115 (279.676102ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.209:31119 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-332624 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.97s)
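
What this block asserts is the failure mode, not success: a Service whose selector matches no running pod makes the service command exit with status 115 (SVC_UNREACHABLE). A rough reproduction with the same testdata manifest:

    kubectl --context functional-332624 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-332624      # expected: exit status 115, SVC_UNREACHABLE
    kubectl --context functional-332624 delete -f testdata/invalidsvc.yaml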

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 config get cpus: exit status 14 (66.73158ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 config get cpus: exit status 14 (69.049671ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
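
The exit status 14 captured above is the expected result of reading a key that has been unset; a sketch of the set/get/unset round trip (cpus is simply the key this test uses):

    out/minikube-linux-amd64 -p functional-332624 config set cpus 2
    out/minikube-linux-amd64 -p functional-332624 config get cpus       # prints 2
    out/minikube-linux-amd64 -p functional-332624 config unset cpus
    out/minikube-linux-amd64 -p functional-332624 config get cpus       # exit 14: key not found in config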

                                                
                                    
TestFunctional/parallel/DashboardCmd (10.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-332624 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-332624 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 91254: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.49s)

                                                
                                    
TestFunctional/parallel/DryRun (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332624 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-332624 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (141.913713ms)

                                                
                                                
-- stdout --
	* [functional-332624] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:09:41.546417   91063 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:09:41.546698   91063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:41.546712   91063 out.go:304] Setting ErrFile to fd 2...
	I0315 23:09:41.546759   91063 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:41.547236   91063 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:09:41.548120   91063 out.go:298] Setting JSON to false
	I0315 23:09:41.548982   91063 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6732,"bootTime":1710537450,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:09:41.549049   91063 start.go:139] virtualization: kvm guest
	I0315 23:09:41.550832   91063 out.go:177] * [functional-332624] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0315 23:09:41.552635   91063 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:09:41.553904   91063 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:09:41.552643   91063 notify.go:220] Checking for updates...
	I0315 23:09:41.556401   91063 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:09:41.557725   91063 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:41.558951   91063 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:09:41.560182   91063 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:09:41.561704   91063 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:09:41.562084   91063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:09:41.562145   91063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:09:41.576751   91063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46135
	I0315 23:09:41.577197   91063 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:09:41.577726   91063 main.go:141] libmachine: Using API Version  1
	I0315 23:09:41.577748   91063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:09:41.578109   91063 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:09:41.578318   91063 main.go:141] libmachine: (functional-332624) Calling .DriverName
	I0315 23:09:41.578597   91063 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:09:41.578930   91063 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:09:41.578992   91063 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:09:41.593502   91063 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41449
	I0315 23:09:41.593894   91063 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:09:41.594317   91063 main.go:141] libmachine: Using API Version  1
	I0315 23:09:41.594359   91063 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:09:41.594681   91063 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:09:41.594911   91063 main.go:141] libmachine: (functional-332624) Calling .DriverName
	I0315 23:09:41.627628   91063 out.go:177] * Using the kvm2 driver based on existing profile
	I0315 23:09:41.628959   91063 start.go:297] selected driver: kvm2
	I0315 23:09:41.628981   91063 start.go:901] validating driver "kvm2" against &{Name:functional-332624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-332624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:09:41.629116   91063 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:09:41.631294   91063 out.go:177] 
	W0315 23:09:41.632749   91063 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0315 23:09:41.634013   91063 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332624 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.28s)
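
Both dry runs reuse the existing profile without creating or changing anything; the failing variant differs only in an impossible memory request, which is why it exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY. A sketch of the two invocations:

    out/minikube-linux-amd64 start -p functional-332624 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio   # exit 23: requested memory below the minimum
    out/minikube-linux-amd64 start -p functional-332624 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio             # exit 0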

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-332624 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-332624 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (158.186473ms)

                                                
                                                
-- stdout --
	* [functional-332624] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:09:23.144690   90538 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:09:23.144955   90538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:23.144963   90538 out.go:304] Setting ErrFile to fd 2...
	I0315 23:09:23.144968   90538 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:09:23.145264   90538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:09:23.145769   90538 out.go:298] Setting JSON to false
	I0315 23:09:23.146707   90538 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":6713,"bootTime":1710537450,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0315 23:09:23.146779   90538 start.go:139] virtualization: kvm guest
	I0315 23:09:23.149226   90538 out.go:177] * [functional-332624] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0315 23:09:23.150744   90538 out.go:177]   - MINIKUBE_LOCATION=17991
	I0315 23:09:23.152134   90538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0315 23:09:23.150755   90538 notify.go:220] Checking for updates...
	I0315 23:09:23.153580   90538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0315 23:09:23.154980   90538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0315 23:09:23.156280   90538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0315 23:09:23.157693   90538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0315 23:09:23.159651   90538 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:09:23.160253   90538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:09:23.160312   90538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:09:23.175598   90538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40545
	I0315 23:09:23.176105   90538 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:09:23.176749   90538 main.go:141] libmachine: Using API Version  1
	I0315 23:09:23.176775   90538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:09:23.177168   90538 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:09:23.177381   90538 main.go:141] libmachine: (functional-332624) Calling .DriverName
	I0315 23:09:23.177669   90538 driver.go:392] Setting default libvirt URI to qemu:///system
	I0315 23:09:23.177980   90538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:09:23.178028   90538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:09:23.192475   90538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43817
	I0315 23:09:23.192954   90538 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:09:23.193577   90538 main.go:141] libmachine: Using API Version  1
	I0315 23:09:23.193616   90538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:09:23.193938   90538 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:09:23.194143   90538 main.go:141] libmachine: (functional-332624) Calling .DriverName
	I0315 23:09:23.227473   90538 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0315 23:09:23.228763   90538 start.go:297] selected driver: kvm2
	I0315 23:09:23.228784   90538 start.go:901] validating driver "kvm2" against &{Name:functional-332624 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17991/minikube-v1.32.1-1710520390-17991-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:functional-332624 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.209 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
unt:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0315 23:09:23.228948   90538 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0315 23:09:23.231219   90538 out.go:177] 
	W0315 23:09:23.232511   90538 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0315 23:09:23.233859   90538 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.84s)
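
The three status calls differ only in output shape: default text, a go-template, and JSON. A sketch, with the template string taken verbatim from the test (including its kublet label, which is only a label):

    out/minikube-linux-amd64 -p functional-332624 status
    out/minikube-linux-amd64 -p functional-332624 status -f 'host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-332624 status -o json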

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (10.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-332624 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-332624 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-z946h" [79b33894-1129-4079-b5c9-9e7a059b87c8] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-z946h" [79b33894-1129-4079-b5c9-9e7a059b87c8] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.005426675s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.209:31146
functional_test.go:1671: http://192.168.39.209:31146: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-z946h

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.209:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.39.209:31146
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.71s)
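
End to end this block is a NodePort round trip: deploy an echoserver, expose it, resolve the URL through minikube, then fetch it. A sketch with the same image; the final curl is illustrative only (the test fetches the URL from Go rather than shelling out), and the node IP and port will differ per run:

    kubectl --context functional-332624 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-332624 expose deployment hello-node-connect --type=NodePort --port=8080
    out/minikube-linux-amd64 -p functional-332624 service hello-node-connect --url   # e.g. http://192.168.39.209:31146
    curl -s http://192.168.39.209:31146/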

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [0a490438-6f11-411d-a974-dc3e697c80c7] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004688876s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-332624 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-332624 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-332624 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-332624 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-332624 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [06f88786-b373-4fe1-bedc-98737a0a5abf] Pending
helpers_test.go:344: "sp-pod" [06f88786-b373-4fe1-bedc-98737a0a5abf] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [06f88786-b373-4fe1-bedc-98737a0a5abf] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 16.005567831s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-332624 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-332624 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-332624 delete -f testdata/storage-provisioner/pod.yaml: (2.813338369s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-332624 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [e1264966-b043-4664-b271-b09ec5432060] Pending
helpers_test.go:344: "sp-pod" [e1264966-b043-4664-b271-b09ec5432060] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [e1264966-b043-4664-b271-b09ec5432060] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004671654s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-332624 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.38s)
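
The sequence above is the usual check that data written to a PersistentVolumeClaim outlives the pod that wrote it; a sketch with the same testdata manifests (the file name foo is arbitrary):

    kubectl --context functional-332624 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-332624 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-332624 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-332624 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-332624 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-332624 exec sp-pod -- ls /tmp/mount      # foo should still be listed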

                                                
                                    
TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.46s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh -n functional-332624 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cp functional-332624:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4126028525/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh -n functional-332624 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh -n functional-332624 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.65s)
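
The three copies above cover host-to-node, node-to-host, and copying into a directory that does not yet exist on the node; a sketch of the same pattern (the local destination path below is a placeholder, the test writes into a temp dir):

    out/minikube-linux-amd64 -p functional-332624 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-332624 cp functional-332624:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    out/minikube-linux-amd64 -p functional-332624 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
    out/minikube-linux-amd64 -p functional-332624 ssh -n functional-332624 "sudo cat /home/docker/cp-test.txt"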

                                                
                                    
TestFunctional/parallel/MySQL (30.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-332624 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-5tzjp" [01df5a7e-fc9f-4426-a0c8-014cacd740ac] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-5tzjp" [01df5a7e-fc9f-4426-a0c8-014cacd740ac] Running
E0315 23:09:39.868597   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.006150732s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;": exit status 1 (208.802802ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;": exit status 1 (149.304025ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;": exit status 1 (282.235755ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (30.39s)
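
The two error codes captured above (1045, then 2002) are the normal symptoms of mysqld still initialising inside the container; the test simply retries the same query until it succeeds. The query itself, with the pod name from this particular run:

    kubectl --context functional-332624 exec mysql-859648c796-5tzjp -- mysql -ppassword -e "show databases;"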

                                                
                                    
TestFunctional/parallel/FileSync (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/82870/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/test/nested/copy/82870/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

                                                
                                    
TestFunctional/parallel/CertSync (1.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/82870.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/ssl/certs/82870.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/82870.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /usr/share/ca-certificates/82870.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/828702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/ssl/certs/828702.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/828702.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /usr/share/ca-certificates/828702.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.75s)
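
The paths above are where the test expects its certificates to have been synced into the VM: under /etc/ssl/certs and /usr/share/ca-certificates, both under the plain file name and under a hash-style name (51391683.0). The numeric names (82870, 828702) are specific to this test run and differ between runs; a sketch of the check:

    out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/ssl/certs/82870.pem"
    out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /usr/share/ca-certificates/82870.pem"
    out/minikube-linux-amd64 -p functional-332624 ssh "sudo cat /etc/ssl/certs/51391683.0"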

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-332624 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active docker": exit status 1 (215.078312ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active containerd": exit status 1 (223.62569ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.44s)
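
Because this profile runs CRI-O, the other container runtimes are expected to be inactive; systemctl is-active exits non-zero for an inactive unit (the status 3 visible in the ssh stderr above), which minikube surfaces as a failed command. A sketch; the crio check is added here only for contrast and is assumed to report active on this profile:

    out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active docker"       # prints inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active containerd"   # prints inactive, non-zero exit
    out/minikube-linux-amd64 -p functional-332624 ssh "sudo systemctl is-active crio"         # assumed active for this profile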

                                                
                                    
TestFunctional/parallel/License (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.35s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "275.030073ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "66.485299ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-332624 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-332624 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-p7tjg" [b25d05f2-09ab-4c9d-be57-82fb2992b39e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-p7tjg" [b25d05f2-09ab-4c9d-be57-82fb2992b39e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.00782811s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "285.391648ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
E0315 23:09:09.147800   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
functional_test.go:1375: Took "57.758142ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                    
TestFunctional/parallel/Version/short (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

                                                
                                    
TestFunctional/parallel/Version/components (0.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.88s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332624 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| localhost/minikube-local-cache-test     | functional-332624  | 00da779e19119 | 3.35kB |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/google-containers/addon-resizer  | functional-332624  | ffd4cfbbe753e | 34.1MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332624 image ls --format table --alsologtostderr:
I0315 23:09:51.349952   91910 out.go:291] Setting OutFile to fd 1 ...
I0315 23:09:51.350584   91910 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:51.350646   91910 out.go:304] Setting ErrFile to fd 2...
I0315 23:09:51.350664   91910 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:51.351124   91910 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
I0315 23:09:51.352254   91910 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:51.352447   91910 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:51.352873   91910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:51.352920   91910 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:51.367953   91910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
I0315 23:09:51.368418   91910 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:51.369063   91910 main.go:141] libmachine: Using API Version  1
I0315 23:09:51.369084   91910 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:51.369520   91910 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:51.369732   91910 main.go:141] libmachine: (functional-332624) Calling .GetState
I0315 23:09:51.371887   91910 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:51.371935   91910 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:51.386916   91910 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35223
I0315 23:09:51.387445   91910 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:51.388016   91910 main.go:141] libmachine: Using API Version  1
I0315 23:09:51.388044   91910 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:51.388382   91910 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:51.388589   91910 main.go:141] libmachine: (functional-332624) Calling .DriverName
I0315 23:09:51.388817   91910 ssh_runner.go:195] Run: systemctl --version
I0315 23:09:51.388847   91910 main.go:141] libmachine: (functional-332624) Calling .GetSSHHostname
I0315 23:09:51.391591   91910 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:51.391931   91910 main.go:141] libmachine: (functional-332624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:96:04", ip: ""} in network mk-functional-332624: {Iface:virbr1 ExpiryTime:2024-03-16 00:06:17 +0000 UTC Type:0 Mac:52:54:00:6b:96:04 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-332624 Clientid:01:52:54:00:6b:96:04}
I0315 23:09:51.391973   91910 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined IP address 192.168.39.209 and MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:51.392130   91910 main.go:141] libmachine: (functional-332624) Calling .GetSSHPort
I0315 23:09:51.392309   91910 main.go:141] libmachine: (functional-332624) Calling .GetSSHKeyPath
I0315 23:09:51.392457   91910 main.go:141] libmachine: (functional-332624) Calling .GetSSHUsername
I0315 23:09:51.392617   91910 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/functional-332624/id_rsa Username:docker}
I0315 23:09:51.499000   91910 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 23:09:51.546025   91910 main.go:141] libmachine: Making call to close driver server
I0315 23:09:51.546052   91910 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:51.546334   91910 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:51.546356   91910 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:51.546368   91910 main.go:141] libmachine: Making call to close driver server
I0315 23:09:51.546366   91910 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
I0315 23:09:51.546377   91910 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:51.546662   91910 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:51.546664   91910 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
I0315 23:09:51.546687   91910 main.go:141] libmachine: Making call to close connection to plugin binary
2024/03/15 23:09:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332624 image ls --format json --alsologtostderr:
[{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153f
b0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-332624"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@s
ha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"3147052
4"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88
d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"00da779e19119154cf9928a407872c5f1a513a1c1402a1dd0885e69481d09429","repoDigests":["localhost/minikube-local-cache-test@
sha256:e26253f2e1f8dde447cd9077d3d902fcac7380cc6a23ccd08a8c882d14475366"],"repoTags":["localhost/minikube-local-cache-test:functional-332624"],"size":"3345"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332624 image ls --format json --alsologtostderr:
I0315 23:09:51.002727   91842 out.go:291] Setting OutFile to fd 1 ...
I0315 23:09:51.002844   91842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:51.002852   91842 out.go:304] Setting ErrFile to fd 2...
I0315 23:09:51.002857   91842 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:51.003086   91842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
I0315 23:09:51.003722   91842 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:51.003858   91842 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:51.004287   91842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:51.004347   91842 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:51.020218   91842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37265
I0315 23:09:51.020669   91842 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:51.021261   91842 main.go:141] libmachine: Using API Version  1
I0315 23:09:51.021285   91842 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:51.021734   91842 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:51.021962   91842 main.go:141] libmachine: (functional-332624) Calling .GetState
I0315 23:09:51.023894   91842 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:51.023937   91842 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:51.039192   91842 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44613
I0315 23:09:51.039582   91842 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:51.040065   91842 main.go:141] libmachine: Using API Version  1
I0315 23:09:51.040093   91842 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:51.040413   91842 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:51.040600   91842 main.go:141] libmachine: (functional-332624) Calling .DriverName
I0315 23:09:51.040819   91842 ssh_runner.go:195] Run: systemctl --version
I0315 23:09:51.040846   91842 main.go:141] libmachine: (functional-332624) Calling .GetSSHHostname
I0315 23:09:51.043105   91842 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:51.043476   91842 main.go:141] libmachine: (functional-332624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:96:04", ip: ""} in network mk-functional-332624: {Iface:virbr1 ExpiryTime:2024-03-16 00:06:17 +0000 UTC Type:0 Mac:52:54:00:6b:96:04 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-332624 Clientid:01:52:54:00:6b:96:04}
I0315 23:09:51.043505   91842 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined IP address 192.168.39.209 and MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:51.043637   91842 main.go:141] libmachine: (functional-332624) Calling .GetSSHPort
I0315 23:09:51.043826   91842 main.go:141] libmachine: (functional-332624) Calling .GetSSHKeyPath
I0315 23:09:51.044032   91842 main.go:141] libmachine: (functional-332624) Calling .GetSSHUsername
I0315 23:09:51.044179   91842 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/functional-332624/id_rsa Username:docker}
I0315 23:09:51.157618   91842 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 23:09:51.273263   91842 main.go:141] libmachine: Making call to close driver server
I0315 23:09:51.273281   91842 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:51.273591   91842 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:51.273614   91842 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:51.273630   91842 main.go:141] libmachine: Making call to close driver server
I0315 23:09:51.273648   91842 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:51.275385   91842 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:51.275409   91842 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:51.275435   91842 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.34s)
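Note: every entry in the image ls --format json stdout above carries the same four fields (id, repoDigests, repoTags, size). The following minimal Go sketch decodes that output; the field names are taken from the stdout above, while the struct name, program layout, and the idea of piping the command into it are illustrative assumptions, not code from minikube or from functional_test.go.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// imageEntry mirrors the four fields visible in the JSON stdout above.
// The type name is illustrative, not taken from minikube's source.
type imageEntry struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	// Read the JSON array from stdin and decode it.
	var images []imageEntry
	if err := json.NewDecoder(os.Stdin).Decode(&images); err != nil {
		fmt.Fprintln(os.Stderr, "decode:", err)
		os.Exit(1)
	}
	// Print a short ID (first 13 characters) and the tags for each image.
	for _, img := range images {
		fmt.Printf("%.13s  %v\n", img.ID, img.RepoTags)
	}
}

Example usage with the binary and profile name shown in this report (hypothetical file name decode_images.go): out/minikube-linux-amd64 -p functional-332624 image ls --format json | go run decode_images.go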

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (2.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image ls --format yaml --alsologtostderr: (2.167401255s)
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332624 image ls --format yaml --alsologtostderr:
- id: 00da779e19119154cf9928a407872c5f1a513a1c1402a1dd0885e69481d09429
repoDigests:
- localhost/minikube-local-cache-test@sha256:e26253f2e1f8dde447cd9077d3d902fcac7380cc6a23ccd08a8c882d14475366
repoTags:
- localhost/minikube-local-cache-test:functional-332624
size: "3345"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-332624
size: "34114467"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332624 image ls --format yaml --alsologtostderr:
I0315 23:09:48.827734   91788 out.go:291] Setting OutFile to fd 1 ...
I0315 23:09:48.827860   91788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:48.827870   91788 out.go:304] Setting ErrFile to fd 2...
I0315 23:09:48.827875   91788 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:48.828089   91788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
I0315 23:09:48.828711   91788 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:48.828835   91788 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:48.829240   91788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:48.829292   91788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:48.844723   91788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40351
I0315 23:09:48.845237   91788 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:48.845841   91788 main.go:141] libmachine: Using API Version  1
I0315 23:09:48.845861   91788 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:48.846217   91788 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:48.846419   91788 main.go:141] libmachine: (functional-332624) Calling .GetState
I0315 23:09:48.848403   91788 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:48.848471   91788 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:48.863943   91788 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40523
I0315 23:09:48.871502   91788 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:48.873328   91788 main.go:141] libmachine: Using API Version  1
I0315 23:09:48.873362   91788 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:48.873840   91788 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:48.874057   91788 main.go:141] libmachine: (functional-332624) Calling .DriverName
I0315 23:09:48.874304   91788 ssh_runner.go:195] Run: systemctl --version
I0315 23:09:48.874336   91788 main.go:141] libmachine: (functional-332624) Calling .GetSSHHostname
I0315 23:09:48.877638   91788 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:48.878067   91788 main.go:141] libmachine: (functional-332624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:96:04", ip: ""} in network mk-functional-332624: {Iface:virbr1 ExpiryTime:2024-03-16 00:06:17 +0000 UTC Type:0 Mac:52:54:00:6b:96:04 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-332624 Clientid:01:52:54:00:6b:96:04}
I0315 23:09:48.878095   91788 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined IP address 192.168.39.209 and MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:48.878287   91788 main.go:141] libmachine: (functional-332624) Calling .GetSSHPort
I0315 23:09:48.878511   91788 main.go:141] libmachine: (functional-332624) Calling .GetSSHKeyPath
I0315 23:09:48.878658   91788 main.go:141] libmachine: (functional-332624) Calling .GetSSHUsername
I0315 23:09:48.878809   91788 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/functional-332624/id_rsa Username:docker}
I0315 23:09:48.982172   91788 ssh_runner.go:195] Run: sudo crictl images --output json
I0315 23:09:50.932548   91788 ssh_runner.go:235] Completed: sudo crictl images --output json: (1.95032565s)
I0315 23:09:50.933558   91788 main.go:141] libmachine: Making call to close driver server
I0315 23:09:50.933575   91788 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:50.933846   91788 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:50.933861   91788 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:50.933868   91788 main.go:141] libmachine: Making call to close driver server
I0315 23:09:50.933995   91788 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:50.933927   91788 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
I0315 23:09:50.934225   91788 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:50.934244   91788 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:50.934244   91788 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (2.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh pgrep buildkitd: exit status 1 (232.09064ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image build -t localhost/my-image:functional-332624 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image build -t localhost/my-image:functional-332624 testdata/build --alsologtostderr: (3.095937346s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-332624 image build -t localhost/my-image:functional-332624 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> beeb6c3eec5
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-332624
--> 76ea98d2dd8
Successfully tagged localhost/my-image:functional-332624
76ea98d2dd8e391eb125b2da8fa7bc391bc6caad4c198d14a08ef64e63cd9ddb
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-332624 image build -t localhost/my-image:functional-332624 testdata/build --alsologtostderr:
I0315 23:09:51.148458   91876 out.go:291] Setting OutFile to fd 1 ...
I0315 23:09:51.148717   91876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:51.148730   91876 out.go:304] Setting ErrFile to fd 2...
I0315 23:09:51.148734   91876 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0315 23:09:51.148930   91876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
I0315 23:09:51.149513   91876 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:51.150177   91876 config.go:182] Loaded profile config "functional-332624": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0315 23:09:51.150830   91876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:51.150912   91876 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:51.166815   91876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43849
I0315 23:09:51.167379   91876 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:51.168024   91876 main.go:141] libmachine: Using API Version  1
I0315 23:09:51.168055   91876 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:51.168454   91876 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:51.168703   91876 main.go:141] libmachine: (functional-332624) Calling .GetState
I0315 23:09:51.170754   91876 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0315 23:09:51.170813   91876 main.go:141] libmachine: Launching plugin server for driver kvm2
I0315 23:09:51.186030   91876 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43395
I0315 23:09:51.186543   91876 main.go:141] libmachine: () Calling .GetVersion
I0315 23:09:51.187238   91876 main.go:141] libmachine: Using API Version  1
I0315 23:09:51.187269   91876 main.go:141] libmachine: () Calling .SetConfigRaw
I0315 23:09:51.187602   91876 main.go:141] libmachine: () Calling .GetMachineName
I0315 23:09:51.187801   91876 main.go:141] libmachine: (functional-332624) Calling .DriverName
I0315 23:09:51.188022   91876 ssh_runner.go:195] Run: systemctl --version
I0315 23:09:51.188045   91876 main.go:141] libmachine: (functional-332624) Calling .GetSSHHostname
I0315 23:09:51.190564   91876 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:51.190958   91876 main.go:141] libmachine: (functional-332624) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6b:96:04", ip: ""} in network mk-functional-332624: {Iface:virbr1 ExpiryTime:2024-03-16 00:06:17 +0000 UTC Type:0 Mac:52:54:00:6b:96:04 Iaid: IPaddr:192.168.39.209 Prefix:24 Hostname:functional-332624 Clientid:01:52:54:00:6b:96:04}
I0315 23:09:51.190986   91876 main.go:141] libmachine: (functional-332624) DBG | domain functional-332624 has defined IP address 192.168.39.209 and MAC address 52:54:00:6b:96:04 in network mk-functional-332624
I0315 23:09:51.191121   91876 main.go:141] libmachine: (functional-332624) Calling .GetSSHPort
I0315 23:09:51.191285   91876 main.go:141] libmachine: (functional-332624) Calling .GetSSHKeyPath
I0315 23:09:51.191442   91876 main.go:141] libmachine: (functional-332624) Calling .GetSSHUsername
I0315 23:09:51.191602   91876 sshutil.go:53] new ssh client: &{IP:192.168.39.209 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/functional-332624/id_rsa Username:docker}
I0315 23:09:51.355888   91876 build_images.go:161] Building image from path: /tmp/build.2861433197.tar
I0315 23:09:51.355963   91876 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0315 23:09:51.378827   91876 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2861433197.tar
I0315 23:09:51.384437   91876 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2861433197.tar: stat -c "%s %y" /var/lib/minikube/build/build.2861433197.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2861433197.tar': No such file or directory
I0315 23:09:51.384488   91876 ssh_runner.go:362] scp /tmp/build.2861433197.tar --> /var/lib/minikube/build/build.2861433197.tar (3072 bytes)
I0315 23:09:51.437430   91876 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2861433197
I0315 23:09:51.453703   91876 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2861433197 -xf /var/lib/minikube/build/build.2861433197.tar
I0315 23:09:51.462851   91876 crio.go:297] Building image: /var/lib/minikube/build/build.2861433197
I0315 23:09:51.462915   91876 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-332624 /var/lib/minikube/build/build.2861433197 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0315 23:09:54.140590   91876 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-332624 /var/lib/minikube/build/build.2861433197 --cgroup-manager=cgroupfs: (2.677633725s)
I0315 23:09:54.140692   91876 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2861433197
I0315 23:09:54.165221   91876 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2861433197.tar
I0315 23:09:54.180721   91876 build_images.go:217] Built localhost/my-image:functional-332624 from /tmp/build.2861433197.tar
I0315 23:09:54.180759   91876 build_images.go:133] succeeded building to: functional-332624
I0315 23:09:54.180765   91876 build_images.go:134] failed building to: 
I0315 23:09:54.180799   91876 main.go:141] libmachine: Making call to close driver server
I0315 23:09:54.180816   91876 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:54.181094   91876 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:54.181117   91876 main.go:141] libmachine: Making call to close connection to plugin binary
I0315 23:09:54.181148   91876 main.go:141] libmachine: (functional-332624) DBG | Closing plugin on server side
I0315 23:09:54.181156   91876 main.go:141] libmachine: Making call to close driver server
I0315 23:09:54.181168   91876 main.go:141] libmachine: (functional-332624) Calling .Close
I0315 23:09:54.181391   91876 main.go:141] libmachine: Successfully made call to close driver server
I0315 23:09:54.181406   91876 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.56s)
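For context on the build flow visible in the stderr above: the client packages the local testdata/build directory into a tar (Building image from path: /tmp/build.2861433197.tar), copies it to the node over SSH, unpacks it under /var/lib/minikube/build, and runs podman build against it there. Below is a rough Go sketch of only the first step (directory to tar). It is an illustrative stand-in under those assumptions, not the actual build_images.go implementation, and the destination path in main is made up.

package main

import (
	"archive/tar"
	"io"
	"os"
	"path/filepath"
)

// tarDir writes the contents of dir into a tar archive at dest.
// Illustrative only: minikube's own packaging lives in build_images.go
// (the log above shows it producing /tmp/build.2861433197.tar); this is
// just the general "build context -> tar" idea, without symlink handling.
func tarDir(dir, dest string) error {
	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	tw := tar.NewWriter(out)
	defer tw.Close()

	return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
		if err != nil {
			return err
		}
		rel, err := filepath.Rel(dir, path)
		if err != nil {
			return err
		}
		if rel == "." {
			return nil // skip the root directory entry itself
		}
		hdr, err := tar.FileInfoHeader(info, "")
		if err != nil {
			return err
		}
		hdr.Name = rel
		if err := tw.WriteHeader(hdr); err != nil {
			return err
		}
		if info.IsDir() {
			return nil
		}
		f, err := os.Open(path)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(tw, f)
		return err
	})
}

func main() {
	// Hypothetical paths: the same build context directory the test uses,
	// written to an arbitrary temporary archive name.
	if err := tarDir("testdata/build", "/tmp/build-context.tar"); err != nil {
		panic(err)
	}
}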

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.354005679s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-332624
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image load --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image load --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr: (3.967018366s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image load --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image load --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr: (2.419266197s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.64s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.204756619s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-332624
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image load --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image load --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr: (6.868688895s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (8.31s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 service list -o json
E0315 23:09:19.388400   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
functional_test.go:1490: Took "451.331253ms" to run "out/minikube-linux-amd64 -p functional-332624 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.45s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.209:31241
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.209:31241
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (20.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdany-port96379360/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710544163240421847" to /tmp/TestFunctionalparallelMountCmdany-port96379360/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710544163240421847" to /tmp/TestFunctionalparallelMountCmdany-port96379360/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710544163240421847" to /tmp/TestFunctionalparallelMountCmdany-port96379360/001/test-1710544163240421847
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (227.611139ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 15 23:09 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 15 23:09 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 15 23:09 test-1710544163240421847
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh cat /mount-9p/test-1710544163240421847
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-332624 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [22e87df4-81e0-44c4-80f9-d46544879ff2] Pending
helpers_test.go:344: "busybox-mount" [22e87df4-81e0-44c4-80f9-d46544879ff2] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [22e87df4-81e0-44c4-80f9-d46544879ff2] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [22e87df4-81e0-44c4-80f9-d46544879ff2] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 18.005621892s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-332624 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdany-port96379360/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (20.69s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image save gcr.io/google-containers/addon-resizer:functional-332624 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image save gcr.io/google-containers/addon-resizer:functional-332624 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.51471204s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (1.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image rm gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image rm gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr: (1.437971923s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (1.85s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (9.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (8.986344145s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (9.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-332624
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 image save --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-332624 image save --daemon gcr.io/google-containers/addon-resizer:functional-332624 --alsologtostderr: (1.827768923s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-332624
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.86s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdspecific-port3597921250/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (232.55389ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdspecific-port3597921250/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh "sudo umount -f /mount-9p": exit status 1 (197.865662ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-332624 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdspecific-port3597921250/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282973653/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282973653/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282973653/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T" /mount1: exit status 1 (221.861037ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-332624 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-332624 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282973653/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282973653/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-332624 /tmp/TestFunctionalparallelMountCmdVerifyCleanup4282973653/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.40s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-332624
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

                                                
                                    
TestFunctional/delete_my-image_image (0.01s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-332624
--- PASS: TestFunctional/delete_my-image_image (0.01s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-332624
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (220.47s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-285481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 23:10:20.829285   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:11:42.749744   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-285481 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m39.76857501s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (220.47s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (5.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-285481 -- rollout status deployment/busybox: (3.19676173s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-cc7rx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-klvd7 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-tgxps -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-cc7rx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-klvd7 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-tgxps -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-cc7rx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-klvd7 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-tgxps -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.76s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-cc7rx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-cc7rx -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-klvd7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-klvd7 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-tgxps -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-285481 -- exec busybox-5b5d89c9d6-tgxps -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.38s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (48.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-285481 -v=7 --alsologtostderr
E0315 23:13:58.906251   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:14:08.401963   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:08.407286   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:08.417567   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:08.437860   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:08.478183   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:08.558492   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:08.718944   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:09.039639   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:09.680578   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:10.961716   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:13.522725   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:18.643675   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:14:26.590269   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:14:28.884451   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-285481 -v=7 --alsologtostderr: (48.035059762s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.89s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-285481 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.53s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (13.46s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp testdata/cp-test.txt ha-285481:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481:/home/docker/cp-test.txt ha-285481-m02:/home/docker/cp-test_ha-285481_ha-285481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test_ha-285481_ha-285481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481:/home/docker/cp-test.txt ha-285481-m03:/home/docker/cp-test_ha-285481_ha-285481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test_ha-285481_ha-285481-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481:/home/docker/cp-test.txt ha-285481-m04:/home/docker/cp-test_ha-285481_ha-285481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test_ha-285481_ha-285481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp testdata/cp-test.txt ha-285481-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m02:/home/docker/cp-test.txt ha-285481:/home/docker/cp-test_ha-285481-m02_ha-285481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test_ha-285481-m02_ha-285481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m02:/home/docker/cp-test.txt ha-285481-m03:/home/docker/cp-test_ha-285481-m02_ha-285481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test_ha-285481-m02_ha-285481-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m02:/home/docker/cp-test.txt ha-285481-m04:/home/docker/cp-test_ha-285481-m02_ha-285481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test_ha-285481-m02_ha-285481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp testdata/cp-test.txt ha-285481-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt ha-285481:/home/docker/cp-test_ha-285481-m03_ha-285481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test_ha-285481-m03_ha-285481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt ha-285481-m02:/home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test_ha-285481-m03_ha-285481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m03:/home/docker/cp-test.txt ha-285481-m04:/home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test_ha-285481-m03_ha-285481-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp testdata/cp-test.txt ha-285481-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3908390656/001/cp-test_ha-285481-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt ha-285481:/home/docker/cp-test_ha-285481-m04_ha-285481.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481 "sudo cat /home/docker/cp-test_ha-285481-m04_ha-285481.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt ha-285481-m02:/home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m02 "sudo cat /home/docker/cp-test_ha-285481-m04_ha-285481-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 cp ha-285481-m04:/home/docker/cp-test.txt ha-285481-m03:/home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 ssh -n ha-285481-m03 "sudo cat /home/docker/cp-test_ha-285481-m04_ha-285481-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.46s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.497454977s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (17.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-285481 node delete m03 -v=7 --alsologtostderr: (16.823078284s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (17.60s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.38s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (349.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-285481 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 23:28:58.906504   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:29:08.403118   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0315 23:30:31.447250   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-285481 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m48.695100527s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (349.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.39s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (73.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-285481 --control-plane -v=7 --alsologtostderr
E0315 23:33:58.905551   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-285481 --control-plane -v=7 --alsologtostderr: (1m13.015209137s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-285481 status -v=7 --alsologtostderr
E0315 23:34:08.402054   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (73.91s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.59s)

                                                
                                    
TestJSONOutput/start/Command (99.12s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-745700 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-745700 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.121827927s)
--- PASS: TestJSONOutput/start/Command (99.12s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.76s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-745700 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.67s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-745700 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.67s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.45s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-745700 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-745700 --output=json --user=testUser: (7.450607186s)
--- PASS: TestJSONOutput/stop/Command (7.45s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-241188 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-241188 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.327ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"512db322-a6f1-44f2-bf86-4b894e5434ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-241188] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"1a0428b2-12b0-42af-a985-924965cba251","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17991"}}
	{"specversion":"1.0","id":"fbe69854-a10c-461e-989a-72ca1ede85e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"03f50ad6-f7f5-4911-98ab-88234e876b0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig"}}
	{"specversion":"1.0","id":"3b94d385-d0e3-452d-9bb6-2483707d938e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube"}}
	{"specversion":"1.0","id":"fd40018b-5635-49b8-adbb-359df490c4c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8c37e464-e8c1-4217-8227-43797502b93f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"69e37b99-6d71-4fb9-9b22-b1d92de3dab6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-241188" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-241188
--- PASS: TestErrorJSONOutput (0.22s)
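Editor's note: the --output=json stream captured in the stdout above is one CloudEvents-style JSON object per line. The following is a minimal sketch for consuming such a stream, assuming only the envelope fields visible in that stdout (specversion, id, source, type, datacontenttype, data); the Event struct, the stdin source, and the printed format are illustrative, not part of minikube's API.

// decode_events.go: read minikube --output=json lines from stdin and
// print a short summary per event. Non-JSON lines are skipped.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// Event mirrors the envelope seen in the captured output; every value
// under "data" in those lines is a string, so a string map suffices here.
type Event struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev Event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // plain log line, not a JSON event
		}
		// Error events (like the DRV_UNSUPPORTED_OS line above) carry a
		// name and exitcode in data; step/info events carry a message.
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			continue
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}

Usage (hypothetical): pipe the command's output through the program, e.g. `out/minikube-linux-amd64 start -p json-output-error-241188 --output=json --driver=fail | go run decode_events.go`.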

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (90.54s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-374625 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-374625 --driver=kvm2  --container-runtime=crio: (43.237588871s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-376906 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-376906 --driver=kvm2  --container-runtime=crio: (44.54840558s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-374625
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-376906
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-376906" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-376906
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-376906: (1.003340423s)
helpers_test.go:175: Cleaning up "first-374625" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-374625
--- PASS: TestMinikubeProfile (90.54s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (26.24s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-472059 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-472059 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.241278649s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.24s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-472059 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-472059 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (27.14s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-487958 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-487958 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.141223517s)
--- PASS: TestMountStart/serial/StartWithMountSecond (27.14s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-487958 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-487958 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.39s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-472059 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.89s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-487958 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-487958 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

                                                
                                    
TestMountStart/serial/Stop (1.34s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-487958
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-487958: (1.338212133s)
--- PASS: TestMountStart/serial/Stop (1.34s)

                                                
                                    
TestMountStart/serial/RestartStopped (23.91s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-487958
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-487958: (22.906489136s)
--- PASS: TestMountStart/serial/RestartStopped (23.91s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-487958 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-487958 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (104.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-658614 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0315 23:38:58.905971   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:39:08.402264   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-658614 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.515442007s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.94s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-658614 -- rollout status deployment/busybox: (3.309806938s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-92n6k -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-r8z86 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-92n6k -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-r8z86 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-92n6k -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-r8z86 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.06s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-92n6k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-92n6k -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-r8z86 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-658614 -- exec busybox-5b5d89c9d6-r8z86 -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.91s)

                                                
                                    
x
+
TestMultiNode/serial/AddNode (41.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-658614 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-658614 -v 3 --alsologtostderr: (40.727737948s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.33s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-658614 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.24s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp testdata/cp-test.txt multinode-658614:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2872696795/001/cp-test_multinode-658614.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614:/home/docker/cp-test.txt multinode-658614-m02:/home/docker/cp-test_multinode-658614_multinode-658614-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m02 "sudo cat /home/docker/cp-test_multinode-658614_multinode-658614-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614:/home/docker/cp-test.txt multinode-658614-m03:/home/docker/cp-test_multinode-658614_multinode-658614-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m03 "sudo cat /home/docker/cp-test_multinode-658614_multinode-658614-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp testdata/cp-test.txt multinode-658614-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2872696795/001/cp-test_multinode-658614-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt multinode-658614:/home/docker/cp-test_multinode-658614-m02_multinode-658614.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614 "sudo cat /home/docker/cp-test_multinode-658614-m02_multinode-658614.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614-m02:/home/docker/cp-test.txt multinode-658614-m03:/home/docker/cp-test_multinode-658614-m02_multinode-658614-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m03 "sudo cat /home/docker/cp-test_multinode-658614-m02_multinode-658614-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp testdata/cp-test.txt multinode-658614-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2872696795/001/cp-test_multinode-658614-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt multinode-658614:/home/docker/cp-test_multinode-658614-m03_multinode-658614.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614 "sudo cat /home/docker/cp-test_multinode-658614-m03_multinode-658614.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 cp multinode-658614-m03:/home/docker/cp-test.txt multinode-658614-m02:/home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 ssh -n multinode-658614-m02 "sudo cat /home/docker/cp-test_multinode-658614-m03_multinode-658614-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.82s)
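For reference, the copy-and-verify pattern exercised above (cp a file into a node, then ssh in and cat it back) can be reproduced outside the test harness. A minimal Go sketch, assuming minikube is on PATH and reusing the profile and node names from this run purely as examples:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: copy a local file into one node of a multinode profile and
	// read it back over ssh to confirm the contents arrived intact.
	func main() {
		profile := "multinode-658614"
		if err := exec.Command("minikube", "-p", profile, "cp",
			"testdata/cp-test.txt", profile+"-m02:/home/docker/cp-test.txt").Run(); err != nil {
			panic(err)
		}
		out, err := exec.Command("minikube", "-p", profile, "ssh", "-n", profile+"-m02",
			"sudo cat /home/docker/cp-test.txt").Output()
		if err != nil {
			panic(err)
		}
		fmt.Print(string(out))
	}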

                                                
                                    
x
+
TestMultiNode/serial/StopNode (3.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-658614 node stop m03: (2.295881018s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-658614 status: exit status 7 (467.741689ms)

                                                
                                                
-- stdout --
	multinode-658614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-658614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-658614-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-658614 status --alsologtostderr: exit status 7 (453.586763ms)

                                                
                                                
-- stdout --
	multinode-658614
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-658614-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-658614-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0315 23:41:39.949193  107690 out.go:291] Setting OutFile to fd 1 ...
	I0315 23:41:39.949728  107690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:41:39.949785  107690 out.go:304] Setting ErrFile to fd 2...
	I0315 23:41:39.949804  107690 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0315 23:41:39.950339  107690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0315 23:41:39.950728  107690 out.go:298] Setting JSON to false
	I0315 23:41:39.950797  107690 mustload.go:65] Loading cluster: multinode-658614
	I0315 23:41:39.950918  107690 notify.go:220] Checking for updates...
	I0315 23:41:39.951630  107690 config.go:182] Loaded profile config "multinode-658614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0315 23:41:39.951664  107690 status.go:255] checking status of multinode-658614 ...
	I0315 23:41:39.952115  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:39.952166  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:39.967888  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33065
	I0315 23:41:39.968438  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:39.969109  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:39.969128  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:39.969487  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:39.969745  107690 main.go:141] libmachine: (multinode-658614) Calling .GetState
	I0315 23:41:39.971525  107690 status.go:330] multinode-658614 host status = "Running" (err=<nil>)
	I0315 23:41:39.971545  107690 host.go:66] Checking if "multinode-658614" exists ...
	I0315 23:41:39.971850  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:39.971898  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:39.988053  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44211
	I0315 23:41:39.988607  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:39.989271  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:39.989290  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:39.989626  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:39.989832  107690 main.go:141] libmachine: (multinode-658614) Calling .GetIP
	I0315 23:41:39.992915  107690 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:41:39.993375  107690 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:41:39.993428  107690 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:41:39.993624  107690 host.go:66] Checking if "multinode-658614" exists ...
	I0315 23:41:39.993909  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:39.993954  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:40.009545  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35655
	I0315 23:41:40.009991  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:40.010461  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:40.010480  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:40.010827  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:40.011082  107690 main.go:141] libmachine: (multinode-658614) Calling .DriverName
	I0315 23:41:40.011367  107690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:41:40.011421  107690 main.go:141] libmachine: (multinode-658614) Calling .GetSSHHostname
	I0315 23:41:40.014586  107690 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:41:40.015001  107690 main.go:141] libmachine: (multinode-658614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2e:f9:e8", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:39:11 +0000 UTC Type:0 Mac:52:54:00:2e:f9:e8 Iaid: IPaddr:192.168.39.5 Prefix:24 Hostname:multinode-658614 Clientid:01:52:54:00:2e:f9:e8}
	I0315 23:41:40.015045  107690 main.go:141] libmachine: (multinode-658614) DBG | domain multinode-658614 has defined IP address 192.168.39.5 and MAC address 52:54:00:2e:f9:e8 in network mk-multinode-658614
	I0315 23:41:40.015193  107690 main.go:141] libmachine: (multinode-658614) Calling .GetSSHPort
	I0315 23:41:40.015421  107690 main.go:141] libmachine: (multinode-658614) Calling .GetSSHKeyPath
	I0315 23:41:40.015590  107690 main.go:141] libmachine: (multinode-658614) Calling .GetSSHUsername
	I0315 23:41:40.015762  107690 sshutil.go:53] new ssh client: &{IP:192.168.39.5 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614/id_rsa Username:docker}
	I0315 23:41:40.107181  107690 ssh_runner.go:195] Run: systemctl --version
	I0315 23:41:40.113503  107690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:41:40.129004  107690 kubeconfig.go:125] found "multinode-658614" server: "https://192.168.39.5:8443"
	I0315 23:41:40.129037  107690 api_server.go:166] Checking apiserver status ...
	I0315 23:41:40.129071  107690 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0315 23:41:40.144288  107690 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W0315 23:41:40.154510  107690 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0315 23:41:40.154567  107690 ssh_runner.go:195] Run: ls
	I0315 23:41:40.159588  107690 api_server.go:253] Checking apiserver healthz at https://192.168.39.5:8443/healthz ...
	I0315 23:41:40.164202  107690 api_server.go:279] https://192.168.39.5:8443/healthz returned 200:
	ok
	I0315 23:41:40.164239  107690 status.go:422] multinode-658614 apiserver status = Running (err=<nil>)
	I0315 23:41:40.164252  107690 status.go:257] multinode-658614 status: &{Name:multinode-658614 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:41:40.164302  107690 status.go:255] checking status of multinode-658614-m02 ...
	I0315 23:41:40.164648  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:40.164716  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:40.181062  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45589
	I0315 23:41:40.181510  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:40.181984  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:40.182003  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:40.182376  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:40.182589  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .GetState
	I0315 23:41:40.184160  107690 status.go:330] multinode-658614-m02 host status = "Running" (err=<nil>)
	I0315 23:41:40.184178  107690 host.go:66] Checking if "multinode-658614-m02" exists ...
	I0315 23:41:40.184511  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:40.184553  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:40.201825  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36919
	I0315 23:41:40.202275  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:40.202822  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:40.202842  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:40.203175  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:40.203437  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .GetIP
	I0315 23:41:40.206880  107690 main.go:141] libmachine: (multinode-658614-m02) DBG | domain multinode-658614-m02 has defined MAC address 52:54:00:82:a8:25 in network mk-multinode-658614
	I0315 23:41:40.207430  107690 main.go:141] libmachine: (multinode-658614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:a8:25", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:40:16 +0000 UTC Type:0 Mac:52:54:00:82:a8:25 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-658614-m02 Clientid:01:52:54:00:82:a8:25}
	I0315 23:41:40.207458  107690 main.go:141] libmachine: (multinode-658614-m02) DBG | domain multinode-658614-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:82:a8:25 in network mk-multinode-658614
	I0315 23:41:40.207574  107690 host.go:66] Checking if "multinode-658614-m02" exists ...
	I0315 23:41:40.207878  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:40.207921  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:40.223488  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36271
	I0315 23:41:40.224027  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:40.224502  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:40.224526  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:40.224856  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:40.225075  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .DriverName
	I0315 23:41:40.225267  107690 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0315 23:41:40.225285  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .GetSSHHostname
	I0315 23:41:40.228278  107690 main.go:141] libmachine: (multinode-658614-m02) DBG | domain multinode-658614-m02 has defined MAC address 52:54:00:82:a8:25 in network mk-multinode-658614
	I0315 23:41:40.228689  107690 main.go:141] libmachine: (multinode-658614-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:82:a8:25", ip: ""} in network mk-multinode-658614: {Iface:virbr1 ExpiryTime:2024-03-16 00:40:16 +0000 UTC Type:0 Mac:52:54:00:82:a8:25 Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:multinode-658614-m02 Clientid:01:52:54:00:82:a8:25}
	I0315 23:41:40.228730  107690 main.go:141] libmachine: (multinode-658614-m02) DBG | domain multinode-658614-m02 has defined IP address 192.168.39.215 and MAC address 52:54:00:82:a8:25 in network mk-multinode-658614
	I0315 23:41:40.228902  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .GetSSHPort
	I0315 23:41:40.229085  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .GetSSHKeyPath
	I0315 23:41:40.229230  107690 main.go:141] libmachine: (multinode-658614-m02) Calling .GetSSHUsername
	I0315 23:41:40.229355  107690 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17991-75602/.minikube/machines/multinode-658614-m02/id_rsa Username:docker}
	I0315 23:41:40.311450  107690 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0315 23:41:40.327364  107690 status.go:257] multinode-658614-m02 status: &{Name:multinode-658614-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0315 23:41:40.327404  107690 status.go:255] checking status of multinode-658614-m03 ...
	I0315 23:41:40.327854  107690 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0315 23:41:40.327902  107690 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0315 23:41:40.343343  107690 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46117
	I0315 23:41:40.343840  107690 main.go:141] libmachine: () Calling .GetVersion
	I0315 23:41:40.344405  107690 main.go:141] libmachine: Using API Version  1
	I0315 23:41:40.344429  107690 main.go:141] libmachine: () Calling .SetConfigRaw
	I0315 23:41:40.344756  107690 main.go:141] libmachine: () Calling .GetMachineName
	I0315 23:41:40.344960  107690 main.go:141] libmachine: (multinode-658614-m03) Calling .GetState
	I0315 23:41:40.346637  107690 status.go:330] multinode-658614-m03 host status = "Stopped" (err=<nil>)
	I0315 23:41:40.346654  107690 status.go:343] host is not running, skipping remaining checks
	I0315 23:41:40.346662  107690 status.go:257] multinode-658614-m03 status: &{Name:multinode-658614-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.22s)
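The non-zero `minikube status` exit seen above can also be read programmatically. A minimal Go sketch, assuming minikube is on PATH; the profile name is taken from this run only as an example, and the exit code (7 here, once m03 is stopped) is simply surfaced rather than interpreted:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: run `minikube status` for the profile and print the exit code.
	// In the run above, exit status 7 is returned once node m03 has been stopped.
	func main() {
		cmd := exec.Command("minikube", "-p", "multinode-658614", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Println("status exit code:", exitErr.ExitCode())
		}
	}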

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (29.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 node start m03 -v=7 --alsologtostderr
E0315 23:42:01.951482   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-658614 node start m03 -v=7 --alsologtostderr: (28.668554172s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.34s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-658614 node delete m03: (1.855464727s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.41s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (172.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-658614 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-658614 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m52.383273944s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-658614 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.94s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-658614
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-658614-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-658614-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.271453ms)

                                                
                                                
-- stdout --
	* [multinode-658614-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-658614-m02' is duplicated with machine name 'multinode-658614-m02' in profile 'multinode-658614'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-658614-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-658614-m03 --driver=kvm2  --container-runtime=crio: (44.691801737s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-658614
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-658614: exit status 80 (236.57878ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-658614 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-658614-m03 already exists in multinode-658614-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-658614-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.86s)

                                                
                                    
x
+
TestScheduledStopUnix (112.68s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-531854 --memory=2048 --driver=kvm2  --container-runtime=crio
E0315 23:58:41.951852   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-531854 --memory=2048 --driver=kvm2  --container-runtime=crio: (40.919831882s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-531854 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-531854 -n scheduled-stop-531854
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-531854 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-531854 --cancel-scheduled
E0315 23:58:58.906142   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
E0315 23:59:08.404143   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-531854 -n scheduled-stop-531854
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-531854
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-531854 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-531854
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-531854: exit status 7 (80.76805ms)

                                                
                                                
-- stdout --
	scheduled-stop-531854
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-531854 -n scheduled-stop-531854
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-531854 -n scheduled-stop-531854: exit status 7 (75.973581ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-531854" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-531854
--- PASS: TestScheduledStopUnix (112.68s)
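The scheduled-stop flow above (schedule a stop, query TimeToStop, cancel the schedule) can be driven outside the test harness as well. A minimal Go sketch, assuming minikube is on PATH and using the profile name from this run only as an example:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: schedule a stop, query TimeToStop, then cancel the schedule.
	// Error handling is deliberately minimal.
	func run(args ...string) {
		out, err := exec.Command("minikube", args...).CombinedOutput()
		fmt.Printf("$ minikube %v\n%s(err=%v)\n", args, out, err)
	}

	func main() {
		profile := "scheduled-stop-531854"
		run("stop", "-p", profile, "--schedule", "5m")
		run("status", "-p", profile, "--format", "{{.TimeToStop}}")
		run("stop", "-p", profile, "--cancel-scheduled")
	}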

                                                
                                    
x
+
TestRunningBinaryUpgrade (221.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3097242516 start -p running-upgrade-196735 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3097242516 start -p running-upgrade-196735 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m7.379584334s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-196735 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-196735 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m32.617287059s)
helpers_test.go:175: Cleaning up "running-upgrade-196735" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-196735
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-196735: (1.163832443s)
--- PASS: TestRunningBinaryUpgrade (221.97s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-188025 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-188025 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (98.884709ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-188025] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
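The MK_USAGE failure above comes down to a mutual-exclusion rule: --no-kubernetes cannot be combined with --kubernetes-version, and minikube exits with status 14. A hedged Go sketch of that rule with illustrative names (this is not minikube's actual implementation):

	package main

	import (
		"errors"
		"fmt"
	)

	// Illustrative only: the usage rule the test exercises is that
	// --no-kubernetes and --kubernetes-version are mutually exclusive;
	// minikube reports MK_USAGE and exits with status 14 in that case.
	func validateStartFlags(noKubernetes bool, kubernetesVersion string) error {
		if noKubernetes && kubernetesVersion != "" {
			return errors.New("cannot specify --kubernetes-version with --no-kubernetes")
		}
		return nil
	}

	func main() {
		if err := validateStartFlags(true, "1.20"); err != nil {
			fmt.Println("usage error:", err)
		}
	}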

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (96.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-188025 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-188025 --driver=kvm2  --container-runtime=crio: (1m36.62025127s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-188025 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (96.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (7.43s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-188025 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-188025 --no-kubernetes --driver=kvm2  --container-runtime=crio: (6.071337674s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-188025 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-188025 status -o json: exit status 2 (237.281584ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-188025","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-188025
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-188025: (1.12379939s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.43s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (29.97s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-188025 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-188025 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.970634597s)
--- PASS: TestNoKubernetes/serial/Start (29.97s)

                                                
                                    
x
+
TestPause/serial/Start (105.42s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-033460 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-033460 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.418333762s)
--- PASS: TestPause/serial/Start (105.42s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-188025 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-188025 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.363495ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
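The check above asks systemd, via `minikube ssh`, whether kubelet is active and treats a non-zero exit as "not running". A minimal Go sketch, assuming minikube is on PATH; the profile name is reused from this run only as an example:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// Sketch only: ask systemd over `minikube ssh` whether kubelet is active.
	// A non-zero exit is what the --no-kubernetes tests expect here.
	func main() {
		cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-188025",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active:", err)
		} else {
			fmt.Println("kubelet is active")
		}
	}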

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-188025
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-188025: (1.612026094s)
--- PASS: TestNoKubernetes/serial/Stop (1.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (46.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-188025 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-188025 --driver=kvm2  --container-runtime=crio: (46.14323526s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (46.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-188025 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-188025 "sudo systemctl is-active --quiet service kubelet": exit status 1 (209.567496ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-869135 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-869135 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (118.5619ms)

                                                
                                                
-- stdout --
	* [false-869135] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17991
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0316 00:03:43.836995  116679 out.go:291] Setting OutFile to fd 1 ...
	I0316 00:03:43.837169  116679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:03:43.837183  116679 out.go:304] Setting ErrFile to fd 2...
	I0316 00:03:43.837190  116679 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0316 00:03:43.837352  116679 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17991-75602/.minikube/bin
	I0316 00:03:43.837964  116679 out.go:298] Setting JSON to false
	I0316 00:03:43.838974  116679 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-7","uptime":9974,"bootTime":1710537450,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0316 00:03:43.839035  116679 start.go:139] virtualization: kvm guest
	I0316 00:03:43.841427  116679 out.go:177] * [false-869135] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0316 00:03:43.842795  116679 notify.go:220] Checking for updates...
	I0316 00:03:43.842806  116679 out.go:177]   - MINIKUBE_LOCATION=17991
	I0316 00:03:43.844067  116679 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0316 00:03:43.845240  116679 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17991-75602/kubeconfig
	I0316 00:03:43.846461  116679 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17991-75602/.minikube
	I0316 00:03:43.847644  116679 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0316 00:03:43.848961  116679 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0316 00:03:43.850774  116679 config.go:182] Loaded profile config "force-systemd-env-380757": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:03:43.850926  116679 config.go:182] Loaded profile config "kubernetes-upgrade-209767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0316 00:03:43.851062  116679 config.go:182] Loaded profile config "pause-033460": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0316 00:03:43.851197  116679 driver.go:392] Setting default libvirt URI to qemu:///system
	I0316 00:03:43.890425  116679 out.go:177] * Using the kvm2 driver based on user configuration
	I0316 00:03:43.891732  116679 start.go:297] selected driver: kvm2
	I0316 00:03:43.891753  116679 start.go:901] validating driver "kvm2" against <nil>
	I0316 00:03:43.891776  116679 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0316 00:03:43.893968  116679 out.go:177] 
	W0316 00:03:43.895226  116679 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0316 00:03:43.896391  116679 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-869135 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-869135" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.7:8443
  name: pause-033460
contexts:
- context:
    cluster: pause-033460
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-033460
  name: pause-033460
current-context: pause-033460
kind: Config
preferences: {}
users:
- name: pause-033460
  user:
    client-certificate: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.crt
    client-key: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-869135

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-869135"

                                                
                                                
----------------------- debugLogs end: false-869135 [took: 3.345647667s] --------------------------------
helpers_test.go:175: Cleaning up "false-869135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-869135
--- PASS: TestNetworkPlugins/group/false (3.63s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.44s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (120.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.1925336001 start -p stopped-upgrade-684927 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.1925336001 start -p stopped-upgrade-684927 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (56.58405075s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.1925336001 -p stopped-upgrade-684927 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.1925336001 -p stopped-upgrade-684927 stop: (3.483322164s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-684927 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-684927 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m0.099746451s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (120.17s)
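(Note: the three commands logged above are the whole upgrade scenario — create a cluster with an old release binary, stop it, then restart the same profile with the binary under test. A rough standalone Go sketch of that flow, not the actual version_upgrade_test.go code; binary paths and flags are copied from the log.)

package main

import (
	"log"
	"os"
	"os/exec"
)

// run executes one CLI invocation and streams its output, aborting on failure.
func run(name string, args ...string) {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", name, args, err)
	}
}

func main() {
	const profile = "stopped-upgrade-684927" // profile name taken from the log above

	// 1. Create a cluster with the old release binary.
	run("/tmp/minikube-v1.26.0.1925336001", "start", "-p", profile,
		"--memory=2200", "--vm-driver=kvm2", "--container-runtime=crio")

	// 2. Stop it with the same old binary.
	run("/tmp/minikube-v1.26.0.1925336001", "-p", profile, "stop")

	// 3. Start it again with the freshly built binary under test.
	run("out/minikube-linux-amd64", "start", "-p", profile,
		"--memory=2200", "--alsologtostderr", "-v=1",
		"--driver=kvm2", "--container-runtime=crio")
}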

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-684927
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (154.31s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-238598 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-238598 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m34.313621458s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (154.31s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (127.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-666637 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-666637 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m7.745973899s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (127.75s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.82s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-313436 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-313436 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (57.823176055s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (57.82s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (8.3s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-238598 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [4f685cec-a4d2-48f2-b6e2-9b267cbd2cf6] Pending
helpers_test.go:344: "busybox" [4f685cec-a4d2-48f2-b6e2-9b267cbd2cf6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0316 00:08:58.905568   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
helpers_test.go:344: "busybox" [4f685cec-a4d2-48f2-b6e2-9b267cbd2cf6] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00474931s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-238598 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.30s)
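(Note: the DeployApp step amounts to applying testdata/busybox.yaml, waiting for the pod labelled integration-test=busybox to become Ready, then reading the open-file limit inside it. A minimal sketch of the same sequence using plain kubectl invocations — an approximation, since the suite polls with its own Go helpers rather than kubectl wait.)

package main

import (
	"log"
	"os"
	"os/exec"
)

// kubectl runs one kubectl command against the no-preload profile's context.
func kubectl(args ...string) {
	full := append([]string{"--context", "no-preload-238598"}, args...)
	cmd := exec.Command("kubectl", full...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubectl %v: %v", full, err)
	}
}

func main() {
	// Apply the busybox manifest used by the test.
	kubectl("create", "-f", "testdata/busybox.yaml")
	// Block until the busybox pod reports Ready (8-minute budget, as in the log).
	kubectl("wait", "--for=condition=ready", "pod",
		"--selector=integration-test=busybox", "--timeout=8m0s")
	// Read the open-file-descriptor limit inside the container.
	kubectl("exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n")
}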

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-666637 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [b494922e-2643-471f-bcda-1510733942e8] Pending
helpers_test.go:344: "busybox" [b494922e-2643-471f-bcda-1510733942e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [b494922e-2643-471f-bcda-1510733942e8] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004599048s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-666637 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-238598 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-238598 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-666637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-666637 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.079176681s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-666637 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [988b8366-de69-435e-ac7d-c5d42dafc4b1] Pending
helpers_test.go:344: "busybox" [988b8366-de69-435e-ac7d-c5d42dafc4b1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [988b8366-de69-435e-ac7d-c5d42dafc4b1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.00387337s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.29s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-313436 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-313436 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.06s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (695.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-238598 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-238598 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m34.973135938s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-238598 -n no-preload-238598
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (695.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (567.55s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-666637 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-666637 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m27.275111838s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-666637 -n embed-certs-666637
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (567.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (544.89s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-313436 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-313436 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m4.604640606s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-313436 -n default-k8s-diff-port-313436
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (544.89s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-402923 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-402923 --alsologtostderr -v=3: (4.303850739s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-402923 -n old-k8s-version-402923: exit status 7 (75.288511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-402923 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
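(Note: minikube status reports component state through its exit code; the "exit status 7 (may be ok)" line above corresponds to a stopped host, which this test accepts before enabling the dashboard addon. A small Go sketch, illustrative rather than the suite's actual helper, of reading that exit code with os/exec.)

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	profile := "old-k8s-version-402923" // profile name from the log above
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("host state: %s\n", out)
	case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
		// Exit status 7 is the "stopped" case the test log marks as "may be ok".
		fmt.Printf("host stopped: %s\n", out)
	default:
		log.Fatal(err)
	}
}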

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (58.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-143629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-143629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (58.96472234s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.96s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (116.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m56.86551513s)
--- PASS: TestNetworkPlugins/group/auto/Start (116.87s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (85.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E0316 00:37:11.453095   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m25.436790487s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.44s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-143629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-143629 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.449533147s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.45s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.42s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-143629 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-143629 --alsologtostderr -v=3: (10.415674716s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.42s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143629 -n newest-cni-143629
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143629 -n newest-cni-143629: exit status 7 (81.4932ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-143629 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (83.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-143629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-143629 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m22.743939574s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-143629 -n newest-cni-143629
E0316 00:38:57.622743   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (83.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-869135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-d9hlp" [87ba4c6f-9017-4a8a-a27e-fa4973cc7390] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-d9hlp" [87ba4c6f-9017-4a8a-a27e-fa4973cc7390] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004805095s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-f4j2j" [d2ad0c2f-82e0-4fc9-b089-da1c3f753e1d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004908152s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-869135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8sgvk" [255ef59b-872b-4df3-9c24-d9ef5bded2bf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8sgvk" [255ef59b-872b-4df3-9c24-d9ef5bded2bf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005235782s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
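(Note: DNS, Localhost and HairPin above are three connectivity probes run inside the netcat deployment — resolve kubernetes.default, connect to localhost:8080, and connect back to the pod's own "netcat" service to exercise hairpin NAT. A compact sketch of the same probes as plain kubectl exec calls, assuming kubectl on PATH; the context name and commands are taken from the log.)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	probes := [][]string{
		{"exec", "deployment/netcat", "--", "nslookup", "kubernetes.default"},                  // DNS
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z localhost 8080"}, // Localhost
		{"exec", "deployment/netcat", "--", "/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080"},    // HairPin
	}
	for _, p := range probes {
		args := append([]string{"--context", "auto-869135"}, p...)
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Printf("kubectl %v -> err=%v\n%s\n", p, err, out)
	}
}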

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (93.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m33.951976921s)
--- PASS: TestNetworkPlugins/group/calico/Start (93.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (109.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0316 00:38:56.984976   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:38:56.990364   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:38:57.000697   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:38:57.021113   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:38:57.061364   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:38:57.141706   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:38:57.302079   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m49.110297471s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (109.11s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-143629 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.06s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-143629 --alsologtostderr -v=1
E0316 00:38:58.263649   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-143629 -n newest-cni-143629
E0316 00:38:58.905894   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/addons-097314/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-143629 -n newest-cni-143629: exit status 2 (557.62415ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-143629 -n newest-cni-143629
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-143629 -n newest-cni-143629: exit status 2 (263.091256ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-143629 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-143629 -n newest-cni-143629
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-143629 -n newest-cni-143629
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.06s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (145.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0316 00:38:59.544122   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (2m25.902376634s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (145.90s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (159.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0316 00:39:07.225449   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:39:08.402798   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/functional-332624/client.crt: no such file or directory
E0316 00:39:17.465879   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:39:37.946845   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
E0316 00:39:46.108526   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.113860   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.124214   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.144558   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.184901   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.265422   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.425645   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:46.745911   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:47.386610   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:48.667356   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:51.228384   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:39:56.348789   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:40:06.589227   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:40:18.907731   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/no-preload-238598/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (2m39.314993147s)
--- PASS: TestNetworkPlugins/group/flannel/Start (159.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-xg8cq" [fe558dc8-76ac-49e6-ae89-ac3ae95583b9] Running
E0316 00:40:27.070139   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:40:29.664423   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:29.669762   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:29.680161   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:29.700523   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:29.740858   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:29.821666   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:29.982134   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006956463s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-869135 replace --force -f testdata/netcat-deployment.yaml
E0316 00:40:30.302562   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-bctq5" [45377ec4-a2c8-4846-ab26-e3c8afd2412e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0316 00:40:30.943604   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:32.223932   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
E0316 00:40:34.784171   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-bctq5" [45377ec4-a2c8-4846-ab26-e3c8afd2412e] Running
E0316 00:40:39.904348   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004649371s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-869135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-7bfc9" [7fa7aee3-f8cc-4ad5-8845-22a54ef5e870] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-7bfc9" [7fa7aee3-f8cc-4ad5-8845-22a54ef5e870] Running
E0316 00:40:50.144800   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004316884s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (99.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0316 00:41:08.030903   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/default-k8s-diff-port-313436/client.crt: no such file or directory
E0316 00:41:10.625431   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-869135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m39.491290453s)
--- PASS: TestNetworkPlugins/group/bridge/Start (99.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-869135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-hc7lf" [6ef8b4e6-341c-4bbc-a8d3-cd1e2dd9be13] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-hc7lf" [6ef8b4e6-341c-4bbc-a8d3-cd1e2dd9be13] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.003738467s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-x9996" [42507bb6-88f3-49c8-ad98-b46d2d5aa203] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005258032s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-869135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lk5gr" [dab63036-7d51-4448-a436-f38ee1d78b9b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0316 00:41:51.585899   82870 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/old-k8s-version-402923/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-lk5gr" [dab63036-7d51-4448-a436-f38ee1d78b9b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003963477s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-869135 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-869135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8kfb5" [8addfd36-311a-42a0-b1db-8a9846954319] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8kfb5" [8addfd36-311a-42a0-b1db-8a9846954319] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004197881s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-869135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-869135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                    

Test skip (39/319)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
122 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
124 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
127 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
128 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
268 TestStartStop/group/disable-driver-mounts 0.15
279 TestNetworkPlugins/group/kubenet 3.48
287 TestNetworkPlugins/group/cilium 4
x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-183652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-183652
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-869135 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-869135" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.7:8443
  name: pause-033460
contexts:
- context:
    cluster: pause-033460
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-033460
  name: pause-033460
current-context: pause-033460
kind: Config
preferences: {}
users:
- name: pause-033460
  user:
    client-certificate: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.crt
    client-key: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-869135

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-869135"

                                                
                                                
----------------------- debugLogs end: kubenet-869135 [took: 3.317804948s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-869135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-869135
--- SKIP: TestNetworkPlugins/group/kubenet (3.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-869135 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-869135" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.61.160:8443
  name: force-systemd-env-380757
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/17991-75602/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.50.7:8443
  name: pause-033460
contexts:
- context:
    cluster: force-systemd-env-380757
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:49 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: force-systemd-env-380757
  name: force-systemd-env-380757
- context:
    cluster: pause-033460
    extensions:
    - extension:
        last-update: Sat, 16 Mar 2024 00:03:08 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-033460
  name: pause-033460
current-context: force-systemd-env-380757
kind: Config
preferences: {}
users:
- name: force-systemd-env-380757
  user:
    client-certificate: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/force-systemd-env-380757/client.crt
    client-key: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/force-systemd-env-380757/client.key
- name: pause-033460
  user:
    client-certificate: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.crt
    client-key: /home/jenkins/minikube-integration/17991-75602/.minikube/profiles/pause-033460/client.key
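The kubeconfig dumped above contains only the force-systemd-env-380757 and pause-033460 contexts, which is why every context-scoped command in this debug section reports that cilium-869135 was not found. A minimal sketch, assuming kubectl is on PATH and that `kubectl config get-contexts -o name` is available, of how a collector could detect the missing context up front (a hypothetical helper, not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// contextExists reports whether kubectl knows about the named context.
// It shells out to `kubectl config get-contexts -o name`, which prints
// one context name per line.
func contextExists(name string) (bool, error) {
	out, err := exec.Command("kubectl", "config", "get-contexts", "-o", "name").Output()
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := contextExists("cilium-869135")
	if err != nil {
		fmt.Println("could not list contexts:", err)
		return
	}
	if !ok {
		// Mirrors the failures above: skip context-scoped debug commands.
		fmt.Println(`context "cilium-869135" does not exist; skipping kubectl debug commands`)
		return
	}
	fmt.Println("context found; context-scoped debug commands could run")
}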

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-869135

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-869135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-869135"

                                                
                                                
----------------------- debugLogs end: cilium-869135 [took: 3.829942616s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-869135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-869135
--- SKIP: TestNetworkPlugins/group/cilium (4.00s)

                                                
                                    